Test Report: KVM_Linux_crio 17585

ea770f64c27c5646b2ec1dfcd286218478f671de:2023-11-08:31788

Failed tests (28/294)

Order  Failed test  Duration (s)
28 TestAddons/parallel/Ingress 158.06
41 TestAddons/StoppedEnableDisable 155.48
107 TestFunctional/parallel/License 0.28
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 7.45
157 TestIngressAddonLegacy/serial/ValidateIngressAddons 172.79
205 TestMultiNode/serial/PingHostFrom2Pods 3.22
211 TestMultiNode/serial/RestartKeepsNodes 682.17
213 TestMultiNode/serial/StopMultiNode 143.16
220 TestPreload 250.01
226 TestRunningBinaryUpgrade 177.15
242 TestPause/serial/SecondStartNoReconfiguration 58.37
264 TestStoppedBinaryUpgrade/Upgrade 264.94
269 TestStartStop/group/old-k8s-version/serial/Stop 139.48
275 TestStartStop/group/no-preload/serial/Stop 140.31
277 TestStartStop/group/embed-certs/serial/Stop 139.75
281 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.38
283 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
284 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.39
289 TestStartStop/group/default-k8s-diff-port/serial/Stop 140.05
290 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
292 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.64
293 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.91
294 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 543.07
295 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.1
296 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 469.89
297 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 400.26
298 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 542.47
299 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 130.33

TestAddons/parallel/Ingress (158.06s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-245409 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-245409 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-245409 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5c192263-36d7-41b2-9be7-c4e7a400b6f4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5c192263-36d7-41b2-9be7-c4e7a400b6f4] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.012632328s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-245409 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-245409 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.917961217s)

** stderr **
	ssh: Process exited with status 28
** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
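
The status 28 above is curl's exit code 28 (operation timed out), propagated back through `minikube ssh`: the request to the ingress controller never completed. A minimal sketch of the same probe run from the host instead of inside the VM, assuming the VM IP reported later in this run (192.168.39.205) is reachable and that the ingress routes on the Host header nginx.example.com as in testdata/nginx-ingress-v1.yaml; the 10-second timeout is illustrative, not part of the test:

// probe.go - hedged sketch of the test's ingress check, done from the host.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}

	req, err := http.NewRequest(http.MethodGet, "http://192.168.39.205/", nil)
	if err != nil {
		panic(err)
	}
	// The ingress selects a backend by Host header; setting req.Host overrides
	// the header derived from the URL, mirroring curl's -H 'Host: ...'.
	req.Host = "nginx.example.com"

	resp, err := client.Do(req)
	if err != nil {
		// A timeout here corresponds to the curl exit 28 seen in the log.
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%s bytes=%d\n", resp.Status, len(body))
}
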
addons_test.go:285: (dbg) Run:  kubectl --context addons-245409 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-245409 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.205
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-245409 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-245409 addons disable ingress-dns --alsologtostderr -v=1: (1.00523995s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-245409 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-245409 addons disable ingress --alsologtostderr -v=1: (7.753500093s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-245409 -n addons-245409
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-245409 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-245409 logs -n 25: (1.347061044s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-759760 | jenkins | v1.32.0 | 07 Nov 23 23:01 UTC |                     |
	|         | -p download-only-759760                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 07 Nov 23 23:02 UTC | 07 Nov 23 23:02 UTC |
	| delete  | -p download-only-759760                                                                     | download-only-759760 | jenkins | v1.32.0 | 07 Nov 23 23:02 UTC | 07 Nov 23 23:02 UTC |
	| delete  | -p download-only-759760                                                                     | download-only-759760 | jenkins | v1.32.0 | 07 Nov 23 23:02 UTC | 07 Nov 23 23:02 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-367625 | jenkins | v1.32.0 | 07 Nov 23 23:02 UTC |                     |
	|         | binary-mirror-367625                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:44133                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-367625                                                                     | binary-mirror-367625 | jenkins | v1.32.0 | 07 Nov 23 23:02 UTC | 07 Nov 23 23:02 UTC |
	| addons  | disable dashboard -p                                                                        | addons-245409        | jenkins | v1.32.0 | 07 Nov 23 23:02 UTC |                     |
	|         | addons-245409                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-245409        | jenkins | v1.32.0 | 07 Nov 23 23:02 UTC |                     |
	|         | addons-245409                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-245409 --wait=true                                                                | addons-245409        | jenkins | v1.32.0 | 07 Nov 23 23:02 UTC | 07 Nov 23 23:05 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-245409        | jenkins | v1.32.0 | 07 Nov 23 23:05 UTC | 07 Nov 23 23:05 UTC |
	|         | -p addons-245409                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-245409        | jenkins | v1.32.0 | 07 Nov 23 23:05 UTC | 07 Nov 23 23:05 UTC |
	|         | addons-245409                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-245409        | jenkins | v1.32.0 | 07 Nov 23 23:05 UTC | 07 Nov 23 23:05 UTC |
	|         | -p addons-245409                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-245409 ssh cat                                                                       | addons-245409        | jenkins | v1.32.0 | 07 Nov 23 23:06 UTC | 07 Nov 23 23:06 UTC |
	|         | /opt/local-path-provisioner/pvc-edd8fb6e-35c5-4be0-b56c-b28712df861d_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-245409 addons disable                                                                | addons-245409        | jenkins | v1.32.0 | 07 Nov 23 23:06 UTC | 07 Nov 23 23:06 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-245409 ip                                                                            | addons-245409        | jenkins | v1.32.0 | 07 Nov 23 23:06 UTC | 07 Nov 23 23:06 UTC |
	| addons  | addons-245409 addons disable                                                                | addons-245409        | jenkins | v1.32.0 | 07 Nov 23 23:06 UTC | 07 Nov 23 23:06 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-245409 addons                                                                        | addons-245409        | jenkins | v1.32.0 | 07 Nov 23 23:06 UTC | 07 Nov 23 23:06 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-245409        | jenkins | v1.32.0 | 07 Nov 23 23:06 UTC | 07 Nov 23 23:06 UTC |
	|         | addons-245409                                                                               |                      |         |         |                     |                     |
	| addons  | addons-245409 addons disable                                                                | addons-245409        | jenkins | v1.32.0 | 07 Nov 23 23:06 UTC | 07 Nov 23 23:06 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-245409 addons                                                                        | addons-245409        | jenkins | v1.32.0 | 07 Nov 23 23:06 UTC | 07 Nov 23 23:06 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-245409 addons                                                                        | addons-245409        | jenkins | v1.32.0 | 07 Nov 23 23:06 UTC | 07 Nov 23 23:06 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-245409 ssh curl -s                                                                   | addons-245409        | jenkins | v1.32.0 | 07 Nov 23 23:06 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-245409 ip                                                                            | addons-245409        | jenkins | v1.32.0 | 07 Nov 23 23:08 UTC | 07 Nov 23 23:08 UTC |
	| addons  | addons-245409 addons disable                                                                | addons-245409        | jenkins | v1.32.0 | 07 Nov 23 23:08 UTC | 07 Nov 23 23:08 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-245409 addons disable                                                                | addons-245409        | jenkins | v1.32.0 | 07 Nov 23 23:08 UTC | 07 Nov 23 23:08 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/07 23:02:10
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 23:02:10.075197   17313 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:02:10.075449   17313 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:02:10.075466   17313 out.go:309] Setting ErrFile to fd 2...
	I1107 23:02:10.075475   17313 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:02:10.075995   17313 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
	I1107 23:02:10.076654   17313 out.go:303] Setting JSON to false
	I1107 23:02:10.077462   17313 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2679,"bootTime":1699395451,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 23:02:10.077518   17313 start.go:138] virtualization: kvm guest
	I1107 23:02:10.079586   17313 out.go:177] * [addons-245409] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 23:02:10.080967   17313 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 23:02:10.082335   17313 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:02:10.080967   17313 notify.go:220] Checking for updates...
	I1107 23:02:10.083623   17313 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1107 23:02:10.085021   17313 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	I1107 23:02:10.086287   17313 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 23:02:10.087503   17313 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 23:02:10.088869   17313 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:02:10.119850   17313 out.go:177] * Using the kvm2 driver based on user configuration
	I1107 23:02:10.121294   17313 start.go:298] selected driver: kvm2
	I1107 23:02:10.121309   17313 start.go:902] validating driver "kvm2" against <nil>
	I1107 23:02:10.121319   17313 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 23:02:10.121949   17313 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:02:10.122035   17313 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17585-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1107 23:02:10.135539   17313 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1107 23:02:10.135587   17313 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1107 23:02:10.135763   17313 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 23:02:10.135822   17313 cni.go:84] Creating CNI manager for ""
	I1107 23:02:10.135834   17313 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1107 23:02:10.135845   17313 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1107 23:02:10.135854   17313 start_flags.go:323] config:
	{Name:addons-245409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-245409 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni
FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:02:10.135974   17313 iso.go:125] acquiring lock: {Name:mk02d02b2a7a45dbdd1b46a32fb0724673cb4d8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:02:10.137803   17313 out.go:177] * Starting control plane node addons-245409 in cluster addons-245409
	I1107 23:02:10.139081   17313 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:02:10.139109   17313 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1107 23:02:10.139118   17313 cache.go:56] Caching tarball of preloaded images
	I1107 23:02:10.139197   17313 preload.go:174] Found /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1107 23:02:10.139211   17313 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1107 23:02:10.139507   17313 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/config.json ...
	I1107 23:02:10.139527   17313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/config.json: {Name:mkc1f16f40de86c2f533d1ebdf63a751cd7d8501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:02:10.139669   17313 start.go:365] acquiring machines lock for addons-245409: {Name:mkf032f30be570950285b6e092e75fb29cc3d166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1107 23:02:10.139725   17313 start.go:369] acquired machines lock for "addons-245409" in 39.145µs
	I1107 23:02:10.139748   17313 start.go:93] Provisioning new machine with config: &{Name:addons-245409 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-245409
Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1107 23:02:10.139810   17313 start.go:125] createHost starting for "" (driver="kvm2")
	I1107 23:02:10.141448   17313 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1107 23:02:10.141567   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:02:10.141609   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:02:10.154389   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44649
	I1107 23:02:10.154772   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:02:10.155245   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:02:10.155267   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:02:10.155583   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:02:10.155754   17313 main.go:141] libmachine: (addons-245409) Calling .GetMachineName
	I1107 23:02:10.155893   17313 main.go:141] libmachine: (addons-245409) Calling .DriverName
	I1107 23:02:10.156033   17313 start.go:159] libmachine.API.Create for "addons-245409" (driver="kvm2")
	I1107 23:02:10.156063   17313 client.go:168] LocalClient.Create starting
	I1107 23:02:10.156093   17313 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem
	I1107 23:02:10.305539   17313 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem
	I1107 23:02:10.437086   17313 main.go:141] libmachine: Running pre-create checks...
	I1107 23:02:10.437108   17313 main.go:141] libmachine: (addons-245409) Calling .PreCreateCheck
	I1107 23:02:10.437625   17313 main.go:141] libmachine: (addons-245409) Calling .GetConfigRaw
	I1107 23:02:10.438052   17313 main.go:141] libmachine: Creating machine...
	I1107 23:02:10.438067   17313 main.go:141] libmachine: (addons-245409) Calling .Create
	I1107 23:02:10.438197   17313 main.go:141] libmachine: (addons-245409) Creating KVM machine...
	I1107 23:02:10.439343   17313 main.go:141] libmachine: (addons-245409) DBG | found existing default KVM network
	I1107 23:02:10.439979   17313 main.go:141] libmachine: (addons-245409) DBG | I1107 23:02:10.439835   17335 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a40}
	I1107 23:02:10.445299   17313 main.go:141] libmachine: (addons-245409) DBG | trying to create private KVM network mk-addons-245409 192.168.39.0/24...
	I1107 23:02:10.510050   17313 main.go:141] libmachine: (addons-245409) DBG | private KVM network mk-addons-245409 192.168.39.0/24 created
	I1107 23:02:10.510082   17313 main.go:141] libmachine: (addons-245409) Setting up store path in /home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409 ...
	I1107 23:02:10.510109   17313 main.go:141] libmachine: (addons-245409) DBG | I1107 23:02:10.510021   17335 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17585-9647/.minikube
	I1107 23:02:10.510123   17313 main.go:141] libmachine: (addons-245409) Building disk image from file:///home/jenkins/minikube-integration/17585-9647/.minikube/cache/iso/amd64/minikube-v1.32.1-amd64.iso
	I1107 23:02:10.510152   17313 main.go:141] libmachine: (addons-245409) Downloading /home/jenkins/minikube-integration/17585-9647/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17585-9647/.minikube/cache/iso/amd64/minikube-v1.32.1-amd64.iso...
	I1107 23:02:10.731696   17313 main.go:141] libmachine: (addons-245409) DBG | I1107 23:02:10.731516   17335 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409/id_rsa...
	I1107 23:02:10.898441   17313 main.go:141] libmachine: (addons-245409) DBG | I1107 23:02:10.898282   17335 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409/addons-245409.rawdisk...
	I1107 23:02:10.898487   17313 main.go:141] libmachine: (addons-245409) DBG | Writing magic tar header
	I1107 23:02:10.898508   17313 main.go:141] libmachine: (addons-245409) DBG | Writing SSH key tar header
	I1107 23:02:10.899267   17313 main.go:141] libmachine: (addons-245409) DBG | I1107 23:02:10.899148   17335 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409 ...
	I1107 23:02:10.899725   17313 main.go:141] libmachine: (addons-245409) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409
	I1107 23:02:10.899759   17313 main.go:141] libmachine: (addons-245409) Setting executable bit set on /home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409 (perms=drwx------)
	I1107 23:02:10.899778   17313 main.go:141] libmachine: (addons-245409) Setting executable bit set on /home/jenkins/minikube-integration/17585-9647/.minikube/machines (perms=drwxr-xr-x)
	I1107 23:02:10.899793   17313 main.go:141] libmachine: (addons-245409) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17585-9647/.minikube/machines
	I1107 23:02:10.899811   17313 main.go:141] libmachine: (addons-245409) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17585-9647/.minikube
	I1107 23:02:10.899827   17313 main.go:141] libmachine: (addons-245409) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17585-9647
	I1107 23:02:10.899856   17313 main.go:141] libmachine: (addons-245409) Setting executable bit set on /home/jenkins/minikube-integration/17585-9647/.minikube (perms=drwxr-xr-x)
	I1107 23:02:10.899890   17313 main.go:141] libmachine: (addons-245409) Setting executable bit set on /home/jenkins/minikube-integration/17585-9647 (perms=drwxrwxr-x)
	I1107 23:02:10.899905   17313 main.go:141] libmachine: (addons-245409) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1107 23:02:10.899920   17313 main.go:141] libmachine: (addons-245409) DBG | Checking permissions on dir: /home/jenkins
	I1107 23:02:10.899930   17313 main.go:141] libmachine: (addons-245409) DBG | Checking permissions on dir: /home
	I1107 23:02:10.899939   17313 main.go:141] libmachine: (addons-245409) DBG | Skipping /home - not owner
	I1107 23:02:10.899947   17313 main.go:141] libmachine: (addons-245409) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1107 23:02:10.899953   17313 main.go:141] libmachine: (addons-245409) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1107 23:02:10.899960   17313 main.go:141] libmachine: (addons-245409) Creating domain...
	I1107 23:02:10.900832   17313 main.go:141] libmachine: (addons-245409) define libvirt domain using xml: 
	I1107 23:02:10.900855   17313 main.go:141] libmachine: (addons-245409) <domain type='kvm'>
	I1107 23:02:10.900866   17313 main.go:141] libmachine: (addons-245409)   <name>addons-245409</name>
	I1107 23:02:10.900879   17313 main.go:141] libmachine: (addons-245409)   <memory unit='MiB'>4000</memory>
	I1107 23:02:10.900892   17313 main.go:141] libmachine: (addons-245409)   <vcpu>2</vcpu>
	I1107 23:02:10.900905   17313 main.go:141] libmachine: (addons-245409)   <features>
	I1107 23:02:10.900917   17313 main.go:141] libmachine: (addons-245409)     <acpi/>
	I1107 23:02:10.900926   17313 main.go:141] libmachine: (addons-245409)     <apic/>
	I1107 23:02:10.900937   17313 main.go:141] libmachine: (addons-245409)     <pae/>
	I1107 23:02:10.900949   17313 main.go:141] libmachine: (addons-245409)     
	I1107 23:02:10.900961   17313 main.go:141] libmachine: (addons-245409)   </features>
	I1107 23:02:10.900973   17313 main.go:141] libmachine: (addons-245409)   <cpu mode='host-passthrough'>
	I1107 23:02:10.900988   17313 main.go:141] libmachine: (addons-245409)   
	I1107 23:02:10.901003   17313 main.go:141] libmachine: (addons-245409)   </cpu>
	I1107 23:02:10.901014   17313 main.go:141] libmachine: (addons-245409)   <os>
	I1107 23:02:10.901023   17313 main.go:141] libmachine: (addons-245409)     <type>hvm</type>
	I1107 23:02:10.901033   17313 main.go:141] libmachine: (addons-245409)     <boot dev='cdrom'/>
	I1107 23:02:10.901041   17313 main.go:141] libmachine: (addons-245409)     <boot dev='hd'/>
	I1107 23:02:10.901049   17313 main.go:141] libmachine: (addons-245409)     <bootmenu enable='no'/>
	I1107 23:02:10.901058   17313 main.go:141] libmachine: (addons-245409)   </os>
	I1107 23:02:10.901065   17313 main.go:141] libmachine: (addons-245409)   <devices>
	I1107 23:02:10.901092   17313 main.go:141] libmachine: (addons-245409)     <disk type='file' device='cdrom'>
	I1107 23:02:10.901116   17313 main.go:141] libmachine: (addons-245409)       <source file='/home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409/boot2docker.iso'/>
	I1107 23:02:10.901132   17313 main.go:141] libmachine: (addons-245409)       <target dev='hdc' bus='scsi'/>
	I1107 23:02:10.901143   17313 main.go:141] libmachine: (addons-245409)       <readonly/>
	I1107 23:02:10.901156   17313 main.go:141] libmachine: (addons-245409)     </disk>
	I1107 23:02:10.901167   17313 main.go:141] libmachine: (addons-245409)     <disk type='file' device='disk'>
	I1107 23:02:10.901181   17313 main.go:141] libmachine: (addons-245409)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1107 23:02:10.901201   17313 main.go:141] libmachine: (addons-245409)       <source file='/home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409/addons-245409.rawdisk'/>
	I1107 23:02:10.901217   17313 main.go:141] libmachine: (addons-245409)       <target dev='hda' bus='virtio'/>
	I1107 23:02:10.901229   17313 main.go:141] libmachine: (addons-245409)     </disk>
	I1107 23:02:10.901242   17313 main.go:141] libmachine: (addons-245409)     <interface type='network'>
	I1107 23:02:10.901255   17313 main.go:141] libmachine: (addons-245409)       <source network='mk-addons-245409'/>
	I1107 23:02:10.901280   17313 main.go:141] libmachine: (addons-245409)       <model type='virtio'/>
	I1107 23:02:10.901299   17313 main.go:141] libmachine: (addons-245409)     </interface>
	I1107 23:02:10.901313   17313 main.go:141] libmachine: (addons-245409)     <interface type='network'>
	I1107 23:02:10.901326   17313 main.go:141] libmachine: (addons-245409)       <source network='default'/>
	I1107 23:02:10.901336   17313 main.go:141] libmachine: (addons-245409)       <model type='virtio'/>
	I1107 23:02:10.901343   17313 main.go:141] libmachine: (addons-245409)     </interface>
	I1107 23:02:10.901349   17313 main.go:141] libmachine: (addons-245409)     <serial type='pty'>
	I1107 23:02:10.901357   17313 main.go:141] libmachine: (addons-245409)       <target port='0'/>
	I1107 23:02:10.901363   17313 main.go:141] libmachine: (addons-245409)     </serial>
	I1107 23:02:10.901371   17313 main.go:141] libmachine: (addons-245409)     <console type='pty'>
	I1107 23:02:10.901388   17313 main.go:141] libmachine: (addons-245409)       <target type='serial' port='0'/>
	I1107 23:02:10.901402   17313 main.go:141] libmachine: (addons-245409)     </console>
	I1107 23:02:10.901411   17313 main.go:141] libmachine: (addons-245409)     <rng model='virtio'>
	I1107 23:02:10.901422   17313 main.go:141] libmachine: (addons-245409)       <backend model='random'>/dev/random</backend>
	I1107 23:02:10.901437   17313 main.go:141] libmachine: (addons-245409)     </rng>
	I1107 23:02:10.901454   17313 main.go:141] libmachine: (addons-245409)     
	I1107 23:02:10.901468   17313 main.go:141] libmachine: (addons-245409)     
	I1107 23:02:10.901479   17313 main.go:141] libmachine: (addons-245409)   </devices>
	I1107 23:02:10.901492   17313 main.go:141] libmachine: (addons-245409) </domain>
	I1107 23:02:10.901502   17313 main.go:141] libmachine: (addons-245409) 
	I1107 23:02:10.906959   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:66:fe:62 in network default
	I1107 23:02:10.907615   17313 main.go:141] libmachine: (addons-245409) Ensuring networks are active...
	I1107 23:02:10.907640   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:10.908212   17313 main.go:141] libmachine: (addons-245409) Ensuring network default is active
	I1107 23:02:10.908495   17313 main.go:141] libmachine: (addons-245409) Ensuring network mk-addons-245409 is active
	I1107 23:02:10.908992   17313 main.go:141] libmachine: (addons-245409) Getting domain xml...
	I1107 23:02:10.909560   17313 main.go:141] libmachine: (addons-245409) Creating domain...
	I1107 23:02:12.317748   17313 main.go:141] libmachine: (addons-245409) Waiting to get IP...
	I1107 23:02:12.318431   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:12.318795   17313 main.go:141] libmachine: (addons-245409) DBG | unable to find current IP address of domain addons-245409 in network mk-addons-245409
	I1107 23:02:12.318820   17313 main.go:141] libmachine: (addons-245409) DBG | I1107 23:02:12.318784   17335 retry.go:31] will retry after 199.28674ms: waiting for machine to come up
	I1107 23:02:12.520265   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:12.520684   17313 main.go:141] libmachine: (addons-245409) DBG | unable to find current IP address of domain addons-245409 in network mk-addons-245409
	I1107 23:02:12.520711   17313 main.go:141] libmachine: (addons-245409) DBG | I1107 23:02:12.520639   17335 retry.go:31] will retry after 278.941451ms: waiting for machine to come up
	I1107 23:02:12.801165   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:12.801499   17313 main.go:141] libmachine: (addons-245409) DBG | unable to find current IP address of domain addons-245409 in network mk-addons-245409
	I1107 23:02:12.801526   17313 main.go:141] libmachine: (addons-245409) DBG | I1107 23:02:12.801466   17335 retry.go:31] will retry after 393.057762ms: waiting for machine to come up
	I1107 23:02:13.195997   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:13.196491   17313 main.go:141] libmachine: (addons-245409) DBG | unable to find current IP address of domain addons-245409 in network mk-addons-245409
	I1107 23:02:13.196520   17313 main.go:141] libmachine: (addons-245409) DBG | I1107 23:02:13.196465   17335 retry.go:31] will retry after 367.693867ms: waiting for machine to come up
	I1107 23:02:13.565878   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:13.566297   17313 main.go:141] libmachine: (addons-245409) DBG | unable to find current IP address of domain addons-245409 in network mk-addons-245409
	I1107 23:02:13.566322   17313 main.go:141] libmachine: (addons-245409) DBG | I1107 23:02:13.566258   17335 retry.go:31] will retry after 559.091336ms: waiting for machine to come up
	I1107 23:02:14.126957   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:14.127375   17313 main.go:141] libmachine: (addons-245409) DBG | unable to find current IP address of domain addons-245409 in network mk-addons-245409
	I1107 23:02:14.127403   17313 main.go:141] libmachine: (addons-245409) DBG | I1107 23:02:14.127332   17335 retry.go:31] will retry after 907.251168ms: waiting for machine to come up
	I1107 23:02:15.036280   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:15.036773   17313 main.go:141] libmachine: (addons-245409) DBG | unable to find current IP address of domain addons-245409 in network mk-addons-245409
	I1107 23:02:15.036799   17313 main.go:141] libmachine: (addons-245409) DBG | I1107 23:02:15.036718   17335 retry.go:31] will retry after 981.515775ms: waiting for machine to come up
	I1107 23:02:16.019256   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:16.019723   17313 main.go:141] libmachine: (addons-245409) DBG | unable to find current IP address of domain addons-245409 in network mk-addons-245409
	I1107 23:02:16.019748   17313 main.go:141] libmachine: (addons-245409) DBG | I1107 23:02:16.019685   17335 retry.go:31] will retry after 926.232984ms: waiting for machine to come up
	I1107 23:02:16.947883   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:16.948261   17313 main.go:141] libmachine: (addons-245409) DBG | unable to find current IP address of domain addons-245409 in network mk-addons-245409
	I1107 23:02:16.948285   17313 main.go:141] libmachine: (addons-245409) DBG | I1107 23:02:16.948237   17335 retry.go:31] will retry after 1.464300434s: waiting for machine to come up
	I1107 23:02:18.414811   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:18.415121   17313 main.go:141] libmachine: (addons-245409) DBG | unable to find current IP address of domain addons-245409 in network mk-addons-245409
	I1107 23:02:18.415134   17313 main.go:141] libmachine: (addons-245409) DBG | I1107 23:02:18.415100   17335 retry.go:31] will retry after 1.76783887s: waiting for machine to come up
	I1107 23:02:20.184932   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:20.185355   17313 main.go:141] libmachine: (addons-245409) DBG | unable to find current IP address of domain addons-245409 in network mk-addons-245409
	I1107 23:02:20.185383   17313 main.go:141] libmachine: (addons-245409) DBG | I1107 23:02:20.185302   17335 retry.go:31] will retry after 2.580167923s: waiting for machine to come up
	I1107 23:02:22.766793   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:22.767255   17313 main.go:141] libmachine: (addons-245409) DBG | unable to find current IP address of domain addons-245409 in network mk-addons-245409
	I1107 23:02:22.767278   17313 main.go:141] libmachine: (addons-245409) DBG | I1107 23:02:22.767228   17335 retry.go:31] will retry after 2.590332844s: waiting for machine to come up
	I1107 23:02:25.359324   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:25.359699   17313 main.go:141] libmachine: (addons-245409) DBG | unable to find current IP address of domain addons-245409 in network mk-addons-245409
	I1107 23:02:25.359730   17313 main.go:141] libmachine: (addons-245409) DBG | I1107 23:02:25.359647   17335 retry.go:31] will retry after 3.649269351s: waiting for machine to come up
	I1107 23:02:29.013680   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:29.014117   17313 main.go:141] libmachine: (addons-245409) DBG | unable to find current IP address of domain addons-245409 in network mk-addons-245409
	I1107 23:02:29.014148   17313 main.go:141] libmachine: (addons-245409) DBG | I1107 23:02:29.014069   17335 retry.go:31] will retry after 3.535806815s: waiting for machine to come up
	I1107 23:02:32.552954   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:32.553413   17313 main.go:141] libmachine: (addons-245409) Found IP for machine: 192.168.39.205
	I1107 23:02:32.553457   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has current primary IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:32.553469   17313 main.go:141] libmachine: (addons-245409) Reserving static IP address...
	I1107 23:02:32.554040   17313 main.go:141] libmachine: (addons-245409) DBG | unable to find host DHCP lease matching {name: "addons-245409", mac: "52:54:00:69:3b:12", ip: "192.168.39.205"} in network mk-addons-245409
	I1107 23:02:32.621588   17313 main.go:141] libmachine: (addons-245409) DBG | Getting to WaitForSSH function...
	I1107 23:02:32.621618   17313 main.go:141] libmachine: (addons-245409) Reserved static IP address: 192.168.39.205
	I1107 23:02:32.621659   17313 main.go:141] libmachine: (addons-245409) Waiting for SSH to be available...
	I1107 23:02:32.623945   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:32.624284   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:minikube Clientid:01:52:54:00:69:3b:12}
	I1107 23:02:32.624315   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:32.624437   17313 main.go:141] libmachine: (addons-245409) DBG | Using SSH client type: external
	I1107 23:02:32.624457   17313 main.go:141] libmachine: (addons-245409) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409/id_rsa (-rw-------)
	I1107 23:02:32.624494   17313 main.go:141] libmachine: (addons-245409) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.205 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1107 23:02:32.624516   17313 main.go:141] libmachine: (addons-245409) DBG | About to run SSH command:
	I1107 23:02:32.624539   17313 main.go:141] libmachine: (addons-245409) DBG | exit 0
	I1107 23:02:32.720966   17313 main.go:141] libmachine: (addons-245409) DBG | SSH cmd err, output: <nil>: 
	I1107 23:02:32.721183   17313 main.go:141] libmachine: (addons-245409) KVM machine creation complete!
	I1107 23:02:32.721511   17313 main.go:141] libmachine: (addons-245409) Calling .GetConfigRaw
	I1107 23:02:32.722018   17313 main.go:141] libmachine: (addons-245409) Calling .DriverName
	I1107 23:02:32.722204   17313 main.go:141] libmachine: (addons-245409) Calling .DriverName
	I1107 23:02:32.722360   17313 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1107 23:02:32.722372   17313 main.go:141] libmachine: (addons-245409) Calling .GetState
	I1107 23:02:32.723919   17313 main.go:141] libmachine: Detecting operating system of created instance...
	I1107 23:02:32.723939   17313 main.go:141] libmachine: Waiting for SSH to be available...
	I1107 23:02:32.723949   17313 main.go:141] libmachine: Getting to WaitForSSH function...
	I1107 23:02:32.723958   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHHostname
	I1107 23:02:32.726203   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:32.726522   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:02:32.726550   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:32.726673   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHPort
	I1107 23:02:32.726815   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:02:32.726946   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:02:32.727058   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHUsername
	I1107 23:02:32.727189   17313 main.go:141] libmachine: Using SSH client type: native
	I1107 23:02:32.727533   17313 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1107 23:02:32.727548   17313 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1107 23:02:32.839795   17313 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1107 23:02:32.839821   17313 main.go:141] libmachine: Detecting the provisioner...
	I1107 23:02:32.839837   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHHostname
	I1107 23:02:32.842326   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:32.842673   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:02:32.842705   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:32.842818   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHPort
	I1107 23:02:32.842999   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:02:32.843197   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:02:32.843418   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHUsername
	I1107 23:02:32.843615   17313 main.go:141] libmachine: Using SSH client type: native
	I1107 23:02:32.843931   17313 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1107 23:02:32.843942   17313 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1107 23:02:32.957488   17313 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb75713b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1107 23:02:32.957550   17313 main.go:141] libmachine: found compatible host: buildroot
	I1107 23:02:32.957558   17313 main.go:141] libmachine: Provisioning with buildroot...
	I1107 23:02:32.957568   17313 main.go:141] libmachine: (addons-245409) Calling .GetMachineName
	I1107 23:02:32.957798   17313 buildroot.go:166] provisioning hostname "addons-245409"
	I1107 23:02:32.957819   17313 main.go:141] libmachine: (addons-245409) Calling .GetMachineName
	I1107 23:02:32.957989   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHHostname
	I1107 23:02:32.960219   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:32.960599   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:02:32.960637   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:32.960787   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHPort
	I1107 23:02:32.960967   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:02:32.961136   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:02:32.961297   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHUsername
	I1107 23:02:32.961449   17313 main.go:141] libmachine: Using SSH client type: native
	I1107 23:02:32.961784   17313 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1107 23:02:32.961802   17313 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-245409 && echo "addons-245409" | sudo tee /etc/hostname
	I1107 23:02:33.088419   17313 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-245409
	
	I1107 23:02:33.088448   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHHostname
	I1107 23:02:33.091183   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:33.091579   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:02:33.091602   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:33.091752   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHPort
	I1107 23:02:33.091908   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:02:33.092075   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:02:33.092191   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHUsername
	I1107 23:02:33.092330   17313 main.go:141] libmachine: Using SSH client type: native
	I1107 23:02:33.092653   17313 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1107 23:02:33.092671   17313 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-245409' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-245409/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-245409' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 23:02:33.212860   17313 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1107 23:02:33.212888   17313 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1107 23:02:33.212920   17313 buildroot.go:174] setting up certificates
	I1107 23:02:33.212934   17313 provision.go:83] configureAuth start
	I1107 23:02:33.212948   17313 main.go:141] libmachine: (addons-245409) Calling .GetMachineName
	I1107 23:02:33.213212   17313 main.go:141] libmachine: (addons-245409) Calling .GetIP
	I1107 23:02:33.215690   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:33.216031   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:02:33.216065   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:33.216300   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHHostname
	I1107 23:02:33.218649   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:33.218968   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:02:33.218996   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:33.219125   17313 provision.go:138] copyHostCerts
	I1107 23:02:33.219188   17313 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1107 23:02:33.219321   17313 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1107 23:02:33.219391   17313 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1107 23:02:33.219467   17313 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.addons-245409 san=[192.168.39.205 192.168.39.205 localhost 127.0.0.1 minikube addons-245409]
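The server certificate above is issued with the logged SAN set (IPs 192.168.39.205 and 127.0.0.1; names localhost, minikube, addons-245409). A compact crypto/x509 sketch of producing a certificate with those SANs; it self-signs for brevity, whereas the step above signs with ca.pem/ca-key.pem:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// SANs mirror the "san=[...]" list in the log line above.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-245409"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "addons-245409"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.39.205"), net.ParseIP("127.0.0.1")},
		}
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// Self-signed here (template doubles as parent); minikube signs with its CA key instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}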
	I1107 23:02:33.385164   17313 provision.go:172] copyRemoteCerts
	I1107 23:02:33.385210   17313 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 23:02:33.385230   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHHostname
	I1107 23:02:33.387601   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:33.387858   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:02:33.387891   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:33.388025   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHPort
	I1107 23:02:33.388219   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:02:33.388337   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHUsername
	I1107 23:02:33.388475   17313 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409/id_rsa Username:docker}
	I1107 23:02:33.473404   17313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1107 23:02:33.495743   17313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1107 23:02:33.517642   17313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1107 23:02:33.542222   17313 provision.go:86] duration metric: configureAuth took 329.274307ms
	I1107 23:02:33.542251   17313 buildroot.go:189] setting minikube options for container-runtime
	I1107 23:02:33.542439   17313 config.go:182] Loaded profile config "addons-245409": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:02:33.542515   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHHostname
	I1107 23:02:33.545067   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:33.545378   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:02:33.545419   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:33.545580   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHPort
	I1107 23:02:33.545766   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:02:33.545923   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:02:33.546062   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHUsername
	I1107 23:02:33.546262   17313 main.go:141] libmachine: Using SSH client type: native
	I1107 23:02:33.546723   17313 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1107 23:02:33.546748   17313 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1107 23:02:33.837665   17313 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1107 23:02:33.837691   17313 main.go:141] libmachine: Checking connection to Docker...
	I1107 23:02:33.837737   17313 main.go:141] libmachine: (addons-245409) Calling .GetURL
	I1107 23:02:33.838979   17313 main.go:141] libmachine: (addons-245409) DBG | Using libvirt version 6000000
	I1107 23:02:33.841258   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:33.841683   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:02:33.841715   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:33.841872   17313 main.go:141] libmachine: Docker is up and running!
	I1107 23:02:33.841885   17313 main.go:141] libmachine: Reticulating splines...
	I1107 23:02:33.841892   17313 client.go:171] LocalClient.Create took 23.685820014s
	I1107 23:02:33.841909   17313 start.go:167] duration metric: libmachine.API.Create for "addons-245409" took 23.685876881s
	I1107 23:02:33.841924   17313 start.go:300] post-start starting for "addons-245409" (driver="kvm2")
	I1107 23:02:33.841933   17313 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 23:02:33.841948   17313 main.go:141] libmachine: (addons-245409) Calling .DriverName
	I1107 23:02:33.842173   17313 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 23:02:33.842189   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHHostname
	I1107 23:02:33.844208   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:33.844532   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:02:33.844560   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:33.844736   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHPort
	I1107 23:02:33.844956   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:02:33.845097   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHUsername
	I1107 23:02:33.845254   17313 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409/id_rsa Username:docker}
	I1107 23:02:33.930582   17313 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 23:02:33.934621   17313 info.go:137] Remote host: Buildroot 2021.02.12
	I1107 23:02:33.934643   17313 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1107 23:02:33.934708   17313 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1107 23:02:33.934737   17313 start.go:303] post-start completed in 92.807674ms
	I1107 23:02:33.934771   17313 main.go:141] libmachine: (addons-245409) Calling .GetConfigRaw
	I1107 23:02:33.935287   17313 main.go:141] libmachine: (addons-245409) Calling .GetIP
	I1107 23:02:33.937562   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:33.937924   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:02:33.937967   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:33.938139   17313 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/config.json ...
	I1107 23:02:33.938309   17313 start.go:128] duration metric: createHost completed in 23.798490585s
	I1107 23:02:33.938330   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHHostname
	I1107 23:02:33.940363   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:33.940660   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:02:33.940697   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:33.940826   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHPort
	I1107 23:02:33.940993   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:02:33.941154   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:02:33.941308   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHUsername
	I1107 23:02:33.941487   17313 main.go:141] libmachine: Using SSH client type: native
	I1107 23:02:33.941797   17313 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I1107 23:02:33.941808   17313 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1107 23:02:34.053349   17313 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699398154.031829668
	
	I1107 23:02:34.053381   17313 fix.go:206] guest clock: 1699398154.031829668
	I1107 23:02:34.053392   17313 fix.go:219] Guest: 2023-11-07 23:02:34.031829668 +0000 UTC Remote: 2023-11-07 23:02:33.938320685 +0000 UTC m=+23.907365600 (delta=93.508983ms)
	I1107 23:02:34.053415   17313 fix.go:190] guest clock delta is within tolerance: 93.508983ms
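The tolerance check above is plain clock arithmetic: parse both timestamps, take the absolute difference, and compare against a threshold. A sketch with this run's exact values (the 2s threshold is an illustrative assumption, not necessarily minikube's constant):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
		guest, err := time.Parse(layout, "2023-11-07 23:02:34.031829668 +0000 UTC")
		if err != nil {
			panic(err)
		}
		host, err := time.Parse(layout, "2023-11-07 23:02:33.938320685 +0000 UTC")
		if err != nil {
			panic(err)
		}
		delta := guest.Sub(host) // 93.508983ms, matching the log line above
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // illustrative threshold
		fmt.Printf("guest clock delta %v, within tolerance: %t\n", delta, delta <= tolerance)
	}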
	I1107 23:02:34.053420   17313 start.go:83] releasing machines lock for "addons-245409", held for 23.913683339s
	I1107 23:02:34.053439   17313 main.go:141] libmachine: (addons-245409) Calling .DriverName
	I1107 23:02:34.053682   17313 main.go:141] libmachine: (addons-245409) Calling .GetIP
	I1107 23:02:34.056060   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:34.056405   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:02:34.056431   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:34.056589   17313 main.go:141] libmachine: (addons-245409) Calling .DriverName
	I1107 23:02:34.057079   17313 main.go:141] libmachine: (addons-245409) Calling .DriverName
	I1107 23:02:34.057278   17313 main.go:141] libmachine: (addons-245409) Calling .DriverName
	I1107 23:02:34.057400   17313 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 23:02:34.057442   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHHostname
	I1107 23:02:34.057496   17313 ssh_runner.go:195] Run: cat /version.json
	I1107 23:02:34.057521   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHHostname
	I1107 23:02:34.060013   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:34.060110   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:34.060321   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:02:34.060355   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:34.060392   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:02:34.060409   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:34.060445   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHPort
	I1107 23:02:34.060702   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:02:34.060717   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHPort
	I1107 23:02:34.060888   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:02:34.060890   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHUsername
	I1107 23:02:34.061114   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHUsername
	I1107 23:02:34.061127   17313 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409/id_rsa Username:docker}
	I1107 23:02:34.061257   17313 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409/id_rsa Username:docker}
	I1107 23:02:34.141179   17313 ssh_runner.go:195] Run: systemctl --version
	I1107 23:02:34.170755   17313 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1107 23:02:34.324493   17313 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1107 23:02:34.332289   17313 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1107 23:02:34.332364   17313 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:02:34.345923   17313 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1107 23:02:34.345942   17313 start.go:472] detecting cgroup driver to use...
	I1107 23:02:34.345993   17313 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1107 23:02:34.358872   17313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1107 23:02:34.369887   17313 docker.go:203] disabling cri-docker service (if available) ...
	I1107 23:02:34.369925   17313 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1107 23:02:34.381314   17313 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1107 23:02:34.392509   17313 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1107 23:02:34.497629   17313 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1107 23:02:34.606517   17313 docker.go:219] disabling docker service ...
	I1107 23:02:34.606572   17313 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1107 23:02:34.619740   17313 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1107 23:02:34.631698   17313 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1107 23:02:34.742949   17313 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1107 23:02:34.857336   17313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1107 23:02:34.870164   17313 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 23:02:34.886279   17313 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1107 23:02:34.886340   17313 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:02:34.895649   17313 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1107 23:02:34.895700   17313 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:02:34.905449   17313 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:02:34.915016   17313 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
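The four sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, and replace any conmon_cgroup setting with "pod". The same edits expressed as Go regexps, over hypothetical sample file contents:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Hypothetical sample of 02-crio.conf before editing.
		conf := []byte("pause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n")

		// s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		// s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))
		// /conmon_cgroup = .*/d
		conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n?`).ReplaceAll(conf, nil)
		// /cgroup_manager = .*/a conmon_cgroup = "pod"
		conf = regexp.MustCompile(`(?m)^(.*cgroup_manager = .*)$`).
			ReplaceAll(conf, []byte("${1}\nconmon_cgroup = \"pod\""))

		fmt.Print(string(conf))
	}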
	I1107 23:02:34.924322   17313 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1107 23:02:34.934409   17313 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1107 23:02:34.943009   17313 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1107 23:02:34.943045   17313 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1107 23:02:34.955979   17313 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1107 23:02:34.964584   17313 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 23:02:35.076631   17313 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1107 23:02:35.243499   17313 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1107 23:02:35.243581   17313 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1107 23:02:35.248343   17313 start.go:540] Will wait 60s for crictl version
	I1107 23:02:35.248404   17313 ssh_runner.go:195] Run: which crictl
	I1107 23:02:35.252197   17313 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1107 23:02:35.288512   17313 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1107 23:02:35.288623   17313 ssh_runner.go:195] Run: crio --version
	I1107 23:02:35.337768   17313 ssh_runner.go:195] Run: crio --version
	I1107 23:02:35.381412   17313 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1107 23:02:35.382791   17313 main.go:141] libmachine: (addons-245409) Calling .GetIP
	I1107 23:02:35.385294   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:35.385633   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:02:35.385686   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:02:35.385807   17313 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1107 23:02:35.389629   17313 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 23:02:35.401006   17313 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:02:35.401052   17313 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 23:02:35.442237   17313 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1107 23:02:35.442291   17313 ssh_runner.go:195] Run: which lz4
	I1107 23:02:35.446106   17313 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1107 23:02:35.449999   17313 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1107 23:02:35.450024   17313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1107 23:02:37.037628   17313 crio.go:444] Took 1.591542 seconds to copy over tarball
	I1107 23:02:37.037682   17313 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1107 23:02:39.965210   17313 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.927507284s)
	I1107 23:02:39.965232   17313 crio.go:451] Took 2.927584 seconds to extract the tarball
	I1107 23:02:39.965240   17313 ssh_runner.go:146] rm: /preloaded.tar.lz4
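The preload step above is copy-then-extract: scp the ~457 MB lz4 tarball into the guest, untar it into /var, then delete it. A sketch that shells out to the same extract command the log times (only meaningful on a host where the tarball actually exists):

	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		// The exact command from the log: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
		out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4").CombinedOutput()
		if err != nil {
			log.Fatalf("extract failed: %v\n%s", err, out)
		}
		log.Printf("extracted preload tarball in %s", time.Since(start))
	}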
	I1107 23:02:40.006673   17313 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 23:02:40.076736   17313 crio.go:496] all images are preloaded for cri-o runtime.
	I1107 23:02:40.087311   17313 cache_images.go:84] Images are preloaded, skipping loading
	I1107 23:02:40.087402   17313 ssh_runner.go:195] Run: crio config
	I1107 23:02:40.158275   17313 cni.go:84] Creating CNI manager for ""
	I1107 23:02:40.158297   17313 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1107 23:02:40.158314   17313 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 23:02:40.158330   17313 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.205 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-245409 NodeName:addons-245409 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.205"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.205 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1107 23:02:40.158448   17313 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.205
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-245409"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.205
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.205"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 23:02:40.158523   17313 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-245409 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.205
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:addons-245409 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1107 23:02:40.158569   17313 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1107 23:02:40.169290   17313 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 23:02:40.169358   17313 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 23:02:40.179919   17313 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1107 23:02:40.195956   17313 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 23:02:40.211971   17313 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I1107 23:02:40.228348   17313 ssh_runner.go:195] Run: grep 192.168.39.205	control-plane.minikube.internal$ /etc/hosts
	I1107 23:02:40.231941   17313 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.205	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 23:02:40.244184   17313 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409 for IP: 192.168.39.205
	I1107 23:02:40.244220   17313 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:02:40.244345   17313 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1107 23:02:40.708072   17313 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt ...
	I1107 23:02:40.708101   17313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt: {Name:mkd82f233be88ec5f25a319cbe494c6c6c3ba28f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:02:40.708270   17313 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key ...
	I1107 23:02:40.708289   17313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key: {Name:mk7469953544571ae0ee38ee24809cec3fa2040f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:02:40.708393   17313 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1107 23:02:40.853139   17313 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt ...
	I1107 23:02:40.853175   17313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt: {Name:mke6670ece2c044b8e172657b484d2c5af8d47e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:02:40.853358   17313 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key ...
	I1107 23:02:40.853374   17313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key: {Name:mk20aeb02391a46cd990610d765abc1993fd9b05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:02:40.853523   17313 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.key
	I1107 23:02:40.853535   17313 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt with IP's: []
	I1107 23:02:40.960424   17313 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt ...
	I1107 23:02:40.960458   17313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: {Name:mkbd621ee6d5a2fad4d358bdc22cb5ffde3a7874 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:02:40.960623   17313 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.key ...
	I1107 23:02:40.960634   17313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.key: {Name:mkb9c83f822f3d7c15866c2b101ecb33c0efe909 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:02:40.960702   17313 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/apiserver.key.358d92cb
	I1107 23:02:40.960718   17313 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/apiserver.crt.358d92cb with IP's: [192.168.39.205 10.96.0.1 127.0.0.1 10.0.0.1]
	I1107 23:02:41.109851   17313 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/apiserver.crt.358d92cb ...
	I1107 23:02:41.109878   17313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/apiserver.crt.358d92cb: {Name:mkad1c405dd9e4cc0692748bfe4e9390cea36346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:02:41.110013   17313 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/apiserver.key.358d92cb ...
	I1107 23:02:41.110025   17313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/apiserver.key.358d92cb: {Name:mkdca2d928fb2137b2a5299cf725fc531e48ea32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:02:41.110085   17313 certs.go:337] copying /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/apiserver.crt.358d92cb -> /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/apiserver.crt
	I1107 23:02:41.110166   17313 certs.go:341] copying /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/apiserver.key.358d92cb -> /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/apiserver.key
	I1107 23:02:41.110216   17313 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/proxy-client.key
	I1107 23:02:41.110232   17313 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/proxy-client.crt with IP's: []
	I1107 23:02:41.207355   17313 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/proxy-client.crt ...
	I1107 23:02:41.207379   17313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/proxy-client.crt: {Name:mk7369cb1510171c24f0ac65ead19a251cbecb66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:02:41.207507   17313 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/proxy-client.key ...
	I1107 23:02:41.207517   17313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/proxy-client.key: {Name:mk15007fa04371f505585f921919e95b63352503 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:02:41.207672   17313 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 23:02:41.207705   17313 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1107 23:02:41.207730   17313 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1107 23:02:41.207755   17313 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1107 23:02:41.208252   17313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 23:02:41.232073   17313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1107 23:02:41.254619   17313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 23:02:41.276724   17313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1107 23:02:41.299212   17313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 23:02:41.319952   17313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1107 23:02:41.341752   17313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 23:02:41.363549   17313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1107 23:02:41.385365   17313 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 23:02:41.406977   17313 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 23:02:41.422324   17313 ssh_runner.go:195] Run: openssl version
	I1107 23:02:41.427357   17313 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 23:02:41.436214   17313 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:02:41.440452   17313 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:02:41.440494   17313 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:02:41.446303   17313 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
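The two runs above wire minikubeCA into OpenSSL's hashed certificate directory: `openssl x509 -hash` prints the subject hash (b5213941 for this CA), and `<hash>.0` in /etc/ssl/certs is symlinked to the PEM so OpenSSL-style lookups find it. A sketch of the same pairing:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
		out, err := exec.Command("openssl", "x509", "-hash", "-noout",
			"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // "b5213941" for this CA
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		// ln -fs equivalent: drop any existing link, then point it at the PEM.
		_ = os.Remove(link)
		if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
			panic(err)
		}
	}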
	I1107 23:02:41.455058   17313 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1107 23:02:41.458853   17313 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1107 23:02:41.458894   17313 kubeadm.go:404] StartCluster: {Name:addons-245409 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-245409 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:02:41.458958   17313 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1107 23:02:41.459007   17313 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1107 23:02:41.502495   17313 cri.go:89] found id: ""
	I1107 23:02:41.502573   17313 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 23:02:41.510956   17313 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 23:02:41.520827   17313 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 23:02:41.529937   17313 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 23:02:41.529983   17313 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1107 23:02:41.580214   17313 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1107 23:02:41.580274   17313 kubeadm.go:322] [preflight] Running pre-flight checks
	I1107 23:02:41.711453   17313 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 23:02:41.711573   17313 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 23:02:41.711704   17313 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1107 23:02:41.935484   17313 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 23:02:42.001707   17313 out.go:204]   - Generating certificates and keys ...
	I1107 23:02:42.001844   17313 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1107 23:02:42.001928   17313 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1107 23:02:42.020744   17313 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1107 23:02:42.151741   17313 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1107 23:02:42.337530   17313 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1107 23:02:42.592285   17313 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1107 23:02:42.797600   17313 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1107 23:02:42.798047   17313 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-245409 localhost] and IPs [192.168.39.205 127.0.0.1 ::1]
	I1107 23:02:42.869940   17313 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1107 23:02:42.870222   17313 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-245409 localhost] and IPs [192.168.39.205 127.0.0.1 ::1]
	I1107 23:02:43.012719   17313 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1107 23:02:43.247017   17313 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1107 23:02:43.414762   17313 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1107 23:02:43.415088   17313 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 23:02:43.518359   17313 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 23:02:43.613374   17313 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 23:02:43.718759   17313 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 23:02:43.872360   17313 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 23:02:43.873013   17313 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 23:02:43.875269   17313 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 23:02:43.877186   17313 out.go:204]   - Booting up control plane ...
	I1107 23:02:43.877335   17313 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 23:02:43.877439   17313 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 23:02:43.877529   17313 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 23:02:43.893288   17313 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 23:02:43.894194   17313 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 23:02:43.894263   17313 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1107 23:02:44.019424   17313 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 23:02:52.020125   17313 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003601 seconds
	I1107 23:02:52.020266   17313 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1107 23:02:52.047769   17313 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1107 23:02:52.577183   17313 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1107 23:02:52.577383   17313 kubeadm.go:322] [mark-control-plane] Marking the node addons-245409 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1107 23:02:53.093558   17313 kubeadm.go:322] [bootstrap-token] Using token: luxxg3.y8ry5wxv4g2xe1n2
	I1107 23:02:53.095111   17313 out.go:204]   - Configuring RBAC rules ...
	I1107 23:02:53.095245   17313 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1107 23:02:53.100637   17313 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1107 23:02:53.112634   17313 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1107 23:02:53.116608   17313 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1107 23:02:53.121069   17313 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1107 23:02:53.129507   17313 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1107 23:02:53.146975   17313 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1107 23:02:53.433453   17313 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1107 23:02:53.509652   17313 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1107 23:02:53.510726   17313 kubeadm.go:322] 
	I1107 23:02:53.510826   17313 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1107 23:02:53.510849   17313 kubeadm.go:322] 
	I1107 23:02:53.510948   17313 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1107 23:02:53.510960   17313 kubeadm.go:322] 
	I1107 23:02:53.510991   17313 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1107 23:02:53.511108   17313 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1107 23:02:53.511181   17313 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1107 23:02:53.511192   17313 kubeadm.go:322] 
	I1107 23:02:53.511254   17313 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1107 23:02:53.511268   17313 kubeadm.go:322] 
	I1107 23:02:53.511353   17313 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1107 23:02:53.511365   17313 kubeadm.go:322] 
	I1107 23:02:53.511426   17313 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1107 23:02:53.511518   17313 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1107 23:02:53.511584   17313 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1107 23:02:53.511591   17313 kubeadm.go:322] 
	I1107 23:02:53.511664   17313 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1107 23:02:53.511742   17313 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1107 23:02:53.511757   17313 kubeadm.go:322] 
	I1107 23:02:53.511871   17313 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token luxxg3.y8ry5wxv4g2xe1n2 \
	I1107 23:02:53.512024   17313 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 \
	I1107 23:02:53.512076   17313 kubeadm.go:322] 	--control-plane 
	I1107 23:02:53.512087   17313 kubeadm.go:322] 
	I1107 23:02:53.512302   17313 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1107 23:02:53.512322   17313 kubeadm.go:322] 
	I1107 23:02:53.512440   17313 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token luxxg3.y8ry5wxv4g2xe1n2 \
	I1107 23:02:53.512574   17313 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 
	I1107 23:02:53.513001   17313 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
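For reference, a sketch of how the bootstrap token and CA hash printed above could be cross-checked on the control-plane node (not part of this run; assumes the standard kubeadm PKI path /etc/kubernetes/pki/ca.crt):

    # list active bootstrap tokens; luxxg3.y8ry5wxv4g2xe1n2 should appear until it expires
    sudo kubeadm token list
    # recompute the CA cert hash and compare with the --discovery-token-ca-cert-hash value above
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'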
	I1107 23:02:53.513049   17313 cni.go:84] Creating CNI manager for ""
	I1107 23:02:53.513067   17313 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1107 23:02:53.515213   17313 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1107 23:02:53.516899   17313 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1107 23:02:53.537184   17313 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
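The log records only the size (457 bytes) of the conflist minikube writes to /etc/cni/net.d/1-k8s.conflist, not its content. As an illustration only, a minimal bridge conflist of the same general shape (the plugin settings and the 10.244.0.0/16 subnet here are assumptions, not taken from this run):

    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF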
	I1107 23:02:53.585192   17313 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1107 23:02:53.585265   17313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:53.585270   17313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=addons-245409 minikube.k8s.io/updated_at=2023_11_07T23_02_53_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:53.650586   17313 ops.go:34] apiserver oom_adj: -16
	I1107 23:02:53.856883   17313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:53.958016   17313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:54.544239   17313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:55.044274   17313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:55.544710   17313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:56.044179   17313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:56.544353   17313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:57.043745   17313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:57.543664   17313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:58.044679   17313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:58.543764   17313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:59.044230   17313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:59.544478   17313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:03:00.044616   17313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:03:00.543699   17313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:03:01.043923   17313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:03:01.543939   17313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:03:02.044659   17313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:03:02.544368   17313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:03:03.044594   17313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:03:03.544377   17313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:03:04.043865   17313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:03:04.544407   17313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:03:05.044635   17313 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:03:05.150048   17313 kubeadm.go:1081] duration metric: took 11.564844111s to wait for elevateKubeSystemPrivileges.
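The repeated "kubectl get sa default" runs above are minikube polling, at roughly 500ms intervals per the timestamps, until the "default" ServiceAccount exists; that is what the elevateKubeSystemPrivileges wait reported here covers. The loop is equivalent to this shell sketch, using the same binary and kubeconfig paths shown in the log:

    until sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done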
	I1107 23:03:05.150074   17313 kubeadm.go:406] StartCluster complete in 23.691182265s
	I1107 23:03:05.150095   17313 settings.go:142] acquiring lock: {Name:mk24113e0811d0822c92609e9886aa6fa175d90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:03:05.150204   17313 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1107 23:03:05.150529   17313 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:03:05.150707   17313 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1107 23:03:05.150770   17313 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
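The toEnable map above lists which addons this run requested. The exact harness invocation is not in this excerpt, but a sketch of how such a set can be requested from the CLI (profile name, driver, and runtime taken from the log; the addon list is a partial, illustrative subset):

    minikube start -p addons-245409 --driver=kvm2 --container-runtime=crio \
      --addons=ingress,ingress-dns,registry,metrics-server,csi-hostpath-driver,volumesnapshots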
	I1107 23:03:05.150876   17313 addons.go:69] Setting volumesnapshots=true in profile "addons-245409"
	I1107 23:03:05.150885   17313 addons.go:69] Setting ingress-dns=true in profile "addons-245409"
	I1107 23:03:05.150906   17313 addons.go:231] Setting addon ingress-dns=true in "addons-245409"
	I1107 23:03:05.150919   17313 addons.go:69] Setting registry=true in profile "addons-245409"
	I1107 23:03:05.150928   17313 config.go:182] Loaded profile config "addons-245409": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:03:05.150935   17313 addons.go:69] Setting inspektor-gadget=true in profile "addons-245409"
	I1107 23:03:05.150957   17313 addons.go:69] Setting storage-provisioner=true in profile "addons-245409"
	I1107 23:03:05.150963   17313 addons.go:69] Setting default-storageclass=true in profile "addons-245409"
	I1107 23:03:05.150980   17313 addons.go:69] Setting ingress=true in profile "addons-245409"
	I1107 23:03:05.150984   17313 addons.go:69] Setting metrics-server=true in profile "addons-245409"
	I1107 23:03:05.150996   17313 addons.go:231] Setting addon ingress=true in "addons-245409"
	I1107 23:03:05.150998   17313 addons.go:69] Setting gcp-auth=true in profile "addons-245409"
	I1107 23:03:05.150999   17313 addons.go:69] Setting cloud-spanner=true in profile "addons-245409"
	I1107 23:03:05.151015   17313 mustload.go:65] Loading cluster: addons-245409
	I1107 23:03:05.151016   17313 addons.go:231] Setting addon cloud-spanner=true in "addons-245409"
	I1107 23:03:05.151020   17313 addons.go:231] Setting addon registry=true in "addons-245409"
	I1107 23:03:05.151033   17313 addons.go:231] Setting addon inspektor-gadget=true in "addons-245409"
	I1107 23:03:05.151043   17313 host.go:66] Checking if "addons-245409" exists ...
	I1107 23:03:05.151065   17313 host.go:66] Checking if "addons-245409" exists ...
	I1107 23:03:05.151089   17313 host.go:66] Checking if "addons-245409" exists ...
	I1107 23:03:05.150973   17313 host.go:66] Checking if "addons-245409" exists ...
	I1107 23:03:05.151186   17313 config.go:182] Loaded profile config "addons-245409": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:03:05.151274   17313 addons.go:231] Setting addon storage-provisioner=true in "addons-245409"
	I1107 23:03:05.151291   17313 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-245409"
	I1107 23:03:05.151294   17313 addons.go:231] Setting addon metrics-server=true in "addons-245409"
	I1107 23:03:05.151326   17313 host.go:66] Checking if "addons-245409" exists ...
	I1107 23:03:05.151340   17313 host.go:66] Checking if "addons-245409" exists ...
	I1107 23:03:05.151523   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:05.150920   17313 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-245409"
	I1107 23:03:05.151553   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:05.151566   17313 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-245409"
	I1107 23:03:05.151663   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:05.151065   17313 host.go:66] Checking if "addons-245409" exists ...
	I1107 23:03:05.151694   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:05.151732   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:05.151737   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:05.150961   17313 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-245409"
	I1107 23:03:05.151526   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:05.151793   17313 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-245409"
	I1107 23:03:05.150909   17313 addons.go:231] Setting addon volumesnapshots=true in "addons-245409"
	I1107 23:03:05.151815   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:05.151073   17313 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-245409"
	I1107 23:03:05.151879   17313 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-245409"
	I1107 23:03:05.150986   17313 addons.go:69] Setting helm-tiller=true in profile "addons-245409"
	I1107 23:03:05.151895   17313 addons.go:231] Setting addon helm-tiller=true in "addons-245409"
	I1107 23:03:05.151910   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:05.151935   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:05.151668   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:05.151988   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:05.152032   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:05.152052   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:05.152118   17313 host.go:66] Checking if "addons-245409" exists ...
	I1107 23:03:05.152126   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:05.152144   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:05.152185   17313 host.go:66] Checking if "addons-245409" exists ...
	I1107 23:03:05.152353   17313 host.go:66] Checking if "addons-245409" exists ...
	I1107 23:03:05.152366   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:05.152389   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:05.152457   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:05.152481   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:05.152507   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:05.152525   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:05.152598   17313 host.go:66] Checking if "addons-245409" exists ...
	I1107 23:03:05.152877   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:05.152929   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:05.153179   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:05.153210   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:05.170206   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41057
	I1107 23:03:05.170739   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:05.171406   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:05.171425   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:05.171806   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:05.171883   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41115
	I1107 23:03:05.172040   17313 main.go:141] libmachine: (addons-245409) Calling .GetState
	I1107 23:03:05.172221   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:05.172471   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40789
	I1107 23:03:05.173164   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:05.173185   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:05.173249   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:05.173262   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:05.173301   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:05.174017   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:05.174206   17313 main.go:141] libmachine: (addons-245409) Calling .GetState
	I1107 23:03:05.174260   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43441
	I1107 23:03:05.175106   17313 addons.go:231] Setting addon default-storageclass=true in "addons-245409"
	I1107 23:03:05.175132   17313 host.go:66] Checking if "addons-245409" exists ...
	I1107 23:03:05.175413   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:05.175460   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:05.175489   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:05.175568   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:05.175581   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:05.175809   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:05.175828   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:05.175938   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:05.176110   17313 main.go:141] libmachine: (addons-245409) Calling .GetState
	I1107 23:03:05.176174   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:05.176652   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:05.176708   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:05.177168   17313 host.go:66] Checking if "addons-245409" exists ...
	I1107 23:03:05.177555   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:05.177615   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:05.178244   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46811
	I1107 23:03:05.179209   17313 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-245409"
	I1107 23:03:05.179291   17313 host.go:66] Checking if "addons-245409" exists ...
	I1107 23:03:05.179683   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:05.179756   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:05.180108   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:05.180530   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:05.180548   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:05.180891   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:05.181392   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:05.181428   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:05.194129   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43515
	I1107 23:03:05.194610   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:05.194874   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44725
	I1107 23:03:05.195039   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:05.195062   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:05.195377   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:05.195485   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:05.195821   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:05.195840   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:05.196030   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:05.196056   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:05.196289   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:05.196841   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:05.196874   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:05.197282   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34183
	I1107 23:03:05.197697   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:05.198140   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:05.198155   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:05.198498   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:05.198678   17313 main.go:141] libmachine: (addons-245409) Calling .GetState
	I1107 23:03:05.199062   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45245
	I1107 23:03:05.199961   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:05.200398   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:05.200415   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:05.200744   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:05.200793   17313 main.go:141] libmachine: (addons-245409) Calling .DriverName
	I1107 23:03:05.203474   17313 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.11
	I1107 23:03:05.201293   17313 main.go:141] libmachine: (addons-245409) Calling .DriverName
	I1107 23:03:05.204998   17313 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1107 23:03:05.205013   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1107 23:03:05.205032   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHHostname
	I1107 23:03:05.210399   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:03:05.210722   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:03:05.210745   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:03:05.210957   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHPort
	I1107 23:03:05.211141   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:03:05.211286   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHUsername
	I1107 23:03:05.211427   17313 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409/id_rsa Username:docker}
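The recurring "scp memory --> /etc/kubernetes/addons/..." steps write embedded manifests to the node over the SSH client created above. A rough shell equivalent, where $DEPLOYMENT_YAML is a hypothetical variable holding the manifest bytes (the actual content is not shown in the log):

    # hypothetical: stream an in-memory manifest to the node over SSH, as the addon installer does
    printf '%s' "$DEPLOYMENT_YAML" \
      | ssh -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409/id_rsa \
          docker@192.168.39.205 'sudo tee /etc/kubernetes/addons/deployment.yaml >/dev/null'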
	I1107 23:03:05.214018   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46199
	I1107 23:03:05.214484   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:05.214964   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:05.214982   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:05.215041   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45511
	I1107 23:03:05.215638   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:05.215698   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:05.216188   17313 main.go:141] libmachine: (addons-245409) Calling .GetState
	I1107 23:03:05.216252   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46843
	I1107 23:03:05.216267   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:05.216283   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:05.216602   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:05.216619   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:05.217040   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:05.217055   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:05.217122   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:05.217156   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:05.217373   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:05.217868   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:05.217904   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:05.218571   17313 main.go:141] libmachine: (addons-245409) Calling .DriverName
	I1107 23:03:05.221099   17313 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:03:05.221747   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40629
	I1107 23:03:05.222445   17313 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:03:05.222461   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1107 23:03:05.222481   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHHostname
	I1107 23:03:05.224035   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37363
	I1107 23:03:05.226024   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:05.226454   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:05.226466   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:05.226528   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:05.226749   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:05.227315   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:05.227338   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:05.227709   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:03:05.227734   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:03:05.227753   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:03:05.227755   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHPort
	I1107 23:03:05.228028   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:05.228044   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:05.228090   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:03:05.228216   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHUsername
	I1107 23:03:05.228338   17313 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409/id_rsa Username:docker}
	I1107 23:03:05.229281   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:05.230172   17313 main.go:141] libmachine: (addons-245409) Calling .GetState
	I1107 23:03:05.232081   17313 main.go:141] libmachine: (addons-245409) Calling .DriverName
	I1107 23:03:05.234238   17313 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1107 23:03:05.235664   17313 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1107 23:03:05.235680   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1107 23:03:05.235697   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHHostname
	I1107 23:03:05.235296   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45907
	I1107 23:03:05.238871   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45075
	I1107 23:03:05.238979   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:03:05.239011   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:05.239228   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:03:05.239261   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:03:05.239427   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHPort
	I1107 23:03:05.239435   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:05.239575   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:03:05.239715   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHUsername
	I1107 23:03:05.239847   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:05.239838   17313 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409/id_rsa Username:docker}
	I1107 23:03:05.239860   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:05.239897   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:05.239907   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:05.240208   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:05.240357   17313 main.go:141] libmachine: (addons-245409) Calling .GetState
	I1107 23:03:05.240626   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:05.241147   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:05.241187   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:05.242031   17313 main.go:141] libmachine: (addons-245409) Calling .DriverName
	I1107 23:03:05.243950   17313 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1107 23:03:05.245404   17313 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1107 23:03:05.245421   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1107 23:03:05.245438   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHHostname
	I1107 23:03:05.243156   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45679
	I1107 23:03:05.245851   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:05.246319   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:05.246344   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:05.246681   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:05.247177   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:05.247206   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:05.247915   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36251
	I1107 23:03:05.248477   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:05.248496   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40739
	I1107 23:03:05.248839   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:05.249024   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:05.249045   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:05.249327   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:05.249347   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:05.249436   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37779
	I1107 23:03:05.249621   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46563
	I1107 23:03:05.249870   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:05.249978   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:05.250222   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:05.250269   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:05.250485   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:05.250529   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:05.250726   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:05.250756   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:05.251038   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:05.251054   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:05.251642   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:05.251829   17313 main.go:141] libmachine: (addons-245409) Calling .GetState
	I1107 23:03:05.252980   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:05.252998   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:05.253857   17313 main.go:141] libmachine: (addons-245409) Calling .DriverName
	I1107 23:03:05.254082   17313 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1107 23:03:05.254094   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1107 23:03:05.254110   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHHostname
	I1107 23:03:05.254749   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:05.254972   17313 main.go:141] libmachine: (addons-245409) Calling .GetState
	I1107 23:03:05.256743   17313 main.go:141] libmachine: (addons-245409) Calling .DriverName
	I1107 23:03:05.259010   17313 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1107 23:03:05.257875   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:03:05.258458   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHPort
	I1107 23:03:05.259211   17313 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-245409" context rescaled to 1 replicas
	I1107 23:03:05.260586   17313 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1107 23:03:05.262331   17313 out.go:177] * Verifying Kubernetes components...
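The "rescaled to 1 replicas" line above amounts to scaling the coredns deployment down to a single replica for the single-node cluster; a hands-on equivalent (illustrative, not the call the harness executed verbatim):

    kubectl --context addons-245409 -n kube-system scale deployment coredns --replicas=1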
	I1107 23:03:05.261152   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:03:05.261407   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:03:05.261761   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:03:05.261967   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHPort
	I1107 23:03:05.263571   17313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:03:05.263598   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:03:05.264731   17313 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1107 23:03:05.265943   17313 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1107 23:03:05.268905   17313 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1107 23:03:05.266721   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38437
	I1107 23:03:05.270454   17313 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1107 23:03:05.268988   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33071
	I1107 23:03:05.265196   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHUsername
	I1107 23:03:05.265277   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:03:05.266745   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40841
	I1107 23:03:05.265115   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:03:05.269273   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39731
	I1107 23:03:05.270801   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:05.273112   17313 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1107 23:03:05.271904   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:03:05.272212   17313 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409/id_rsa Username:docker}
	I1107 23:03:05.272231   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHUsername
	I1107 23:03:05.272537   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:05.272604   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:05.272677   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:05.272805   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33605
	I1107 23:03:05.272939   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:05.276185   17313 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1107 23:03:05.276234   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39185
	I1107 23:03:05.275154   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:05.275179   17313 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409/id_rsa Username:docker}
	I1107 23:03:05.275287   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:05.275300   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:05.275349   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:05.275897   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46097
	I1107 23:03:05.274449   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:05.277906   17313 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1107 23:03:05.279464   17313 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1107 23:03:05.279485   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1107 23:03:05.277925   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:05.279507   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHHostname
	I1107 23:03:05.277941   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:05.278010   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:05.278231   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:05.278362   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:05.278446   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:05.278475   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:05.279721   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:05.279835   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:05.280108   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:05.280120   17313 main.go:141] libmachine: (addons-245409) Calling .GetState
	I1107 23:03:05.280149   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:05.280182   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:05.280417   17313 main.go:141] libmachine: (addons-245409) Calling .GetState
	I1107 23:03:05.280461   17313 main.go:141] libmachine: (addons-245409) Calling .GetState
	I1107 23:03:05.280532   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:05.280568   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:05.281117   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:05.281134   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:05.281450   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:05.281530   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:05.281567   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:05.281574   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:05.281582   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:05.281768   17313 main.go:141] libmachine: (addons-245409) Calling .GetState
	I1107 23:03:05.282382   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:05.282795   17313 main.go:141] libmachine: (addons-245409) Calling .DriverName
	I1107 23:03:05.284393   17313 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.2
	I1107 23:03:05.283256   17313 main.go:141] libmachine: (addons-245409) Calling .GetState
	I1107 23:03:05.283530   17313 main.go:141] libmachine: (addons-245409) Calling .DriverName
	I1107 23:03:05.284116   17313 main.go:141] libmachine: (addons-245409) Calling .DriverName
	I1107 23:03:05.284658   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:03:05.285770   17313 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1107 23:03:05.285783   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1107 23:03:05.285790   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:03:05.285803   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHHostname
	I1107 23:03:05.285816   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:03:05.287288   17313 out.go:177]   - Using image docker.io/registry:2.8.3
	I1107 23:03:05.285453   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHPort
	I1107 23:03:05.285467   17313 main.go:141] libmachine: (addons-245409) Calling .DriverName
	I1107 23:03:05.286466   17313 main.go:141] libmachine: (addons-245409) Calling .DriverName
	I1107 23:03:05.288531   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:03:05.289807   17313 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1107 23:03:05.288700   17313 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1107 23:03:05.288953   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:03:05.289202   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHPort
	I1107 23:03:05.289206   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:03:05.291091   17313 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1107 23:03:05.291117   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:03:05.292207   17313 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.22.0
	I1107 23:03:05.292347   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:03:05.292377   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHUsername
	I1107 23:03:05.293307   17313 out.go:177]   - Using image docker.io/busybox:stable
	I1107 23:03:05.293327   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1107 23:03:05.294386   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHHostname
	I1107 23:03:05.295576   17313 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1107 23:03:05.295591   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1107 23:03:05.295606   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHHostname
	I1107 23:03:05.294580   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHUsername
	I1107 23:03:05.295071   17313 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409/id_rsa Username:docker}
	I1107 23:03:05.297019   17313 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1107 23:03:05.297157   17313 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1107 23:03:05.298766   17313 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1107 23:03:05.298486   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:03:05.297274   17313 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409/id_rsa Username:docker}
	I1107 23:03:05.299219   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHPort
	I1107 23:03:05.299276   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:03:05.300270   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:03:05.300290   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:03:05.300297   17313 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1107 23:03:05.300381   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:03:05.300411   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:03:05.301947   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:03:05.300427   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1107 23:03:05.301968   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHHostname
	I1107 23:03:05.299926   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHPort
	I1107 23:03:05.302030   17313 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1107 23:03:05.302048   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1107 23:03:05.302067   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHHostname
	I1107 23:03:05.302081   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHUsername
	I1107 23:03:05.302153   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:03:05.302275   17313 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409/id_rsa Username:docker}
	I1107 23:03:05.302302   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42207
	I1107 23:03:05.302482   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHUsername
	I1107 23:03:05.302632   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:05.302736   17313 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409/id_rsa Username:docker}
	I1107 23:03:05.303094   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:05.303112   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:05.303176   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45453
	I1107 23:03:05.304163   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:05.304259   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:05.304452   17313 main.go:141] libmachine: (addons-245409) Calling .GetState
	I1107 23:03:05.304756   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:05.304779   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:05.305097   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:05.305278   17313 main.go:141] libmachine: (addons-245409) Calling .GetState
	I1107 23:03:05.306692   17313 main.go:141] libmachine: (addons-245409) Calling .DriverName
	I1107 23:03:05.308358   17313 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1107 23:03:05.307275   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:03:05.307886   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:03:05.308160   17313 main.go:141] libmachine: (addons-245409) Calling .DriverName
	I1107 23:03:05.308221   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHPort
	I1107 23:03:05.308521   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHPort
	I1107 23:03:05.309769   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:03:05.309794   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:03:05.309818   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:03:05.309825   17313 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1107 23:03:05.309844   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1107 23:03:05.309857   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHHostname
	I1107 23:03:05.309861   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:03:05.309922   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:03:05.310446   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:03:05.310476   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHUsername
	I1107 23:03:05.311878   17313 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1107 23:03:05.310575   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHUsername
	I1107 23:03:05.310693   17313 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409/id_rsa Username:docker}
	I1107 23:03:05.313281   17313 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1107 23:03:05.313297   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1107 23:03:05.313313   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHHostname
	I1107 23:03:05.313523   17313 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409/id_rsa Username:docker}
	I1107 23:03:05.313745   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:03:05.314040   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:03:05.314065   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:03:05.314435   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHPort
	I1107 23:03:05.314590   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:03:05.314698   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHUsername
	I1107 23:03:05.314785   17313 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409/id_rsa Username:docker}
	I1107 23:03:05.316097   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:03:05.316407   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:03:05.316433   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:03:05.316559   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHPort
	I1107 23:03:05.316894   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:03:05.317025   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHUsername
	I1107 23:03:05.317164   17313 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409/id_rsa Username:docker}
	W1107 23:03:05.318025   17313 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:38988->192.168.39.205:22: read: connection reset by peer
	I1107 23:03:05.318050   17313 retry.go:31] will retry after 169.946829ms: ssh: handshake failed: read tcp 192.168.39.1:38988->192.168.39.205:22: read: connection reset by peer
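The handshake failure above is transient and retried with backoff. If it persisted, the same connection the harness is opening could be probed by hand using the key path, username, and address from the sshutil lines — a manual sketch, not part of the test run:

	ssh -o StrictHostKeyChecking=no -p 22 \
	    -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409/id_rsa \
	    docker@192.168.39.205 true && echo "ssh ok"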
	I1107 23:03:05.385546   17313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1107 23:03:05.491860   17313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:03:05.555755   17313 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
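The pipeline above rewrites the CoreDNS ConfigMap in place: it inserts a hosts block before the forward directive and a log directive before errors. Assuming the stock kubeadm Corefile, the replaced config ends up looking roughly like this (trimmed):

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}

This is what makes host.minikube.internal resolve to the host's bridge address (192.168.39.1) from inside the cluster.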
	I1107 23:03:05.556507   17313 node_ready.go:35] waiting up to 6m0s for node "addons-245409" to be "Ready" ...
	I1107 23:03:05.557730   17313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1107 23:03:05.593957   17313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1107 23:03:05.615563   17313 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1107 23:03:05.615586   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1107 23:03:05.615848   17313 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1107 23:03:05.615869   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1107 23:03:05.664531   17313 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1107 23:03:05.664562   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1107 23:03:05.680193   17313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1107 23:03:05.688039   17313 node_ready.go:49] node "addons-245409" has status "Ready":"True"
	I1107 23:03:05.688060   17313 node_ready.go:38] duration metric: took 131.525803ms waiting for node "addons-245409" to be "Ready" ...
	I1107 23:03:05.688069   17313 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:03:05.707886   17313 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1107 23:03:05.707912   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1107 23:03:05.753274   17313 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-245409" in "kube-system" namespace to be "Ready" ...
	I1107 23:03:05.770561   17313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1107 23:03:05.777993   17313 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1107 23:03:05.778010   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1107 23:03:05.810022   17313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1107 23:03:05.873268   17313 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1107 23:03:05.873293   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1107 23:03:05.896522   17313 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1107 23:03:05.896548   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1107 23:03:05.906136   17313 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1107 23:03:05.906156   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1107 23:03:05.911166   17313 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1107 23:03:05.911181   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1107 23:03:05.914498   17313 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1107 23:03:05.914518   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1107 23:03:05.940345   17313 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1107 23:03:05.940368   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1107 23:03:05.993629   17313 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1107 23:03:05.993650   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1107 23:03:06.056546   17313 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1107 23:03:06.056565   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1107 23:03:06.073182   17313 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1107 23:03:06.073205   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1107 23:03:06.075195   17313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1107 23:03:06.132614   17313 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1107 23:03:06.132638   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1107 23:03:06.139752   17313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1107 23:03:06.183425   17313 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1107 23:03:06.183461   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1107 23:03:06.225353   17313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1107 23:03:06.231922   17313 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1107 23:03:06.231950   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1107 23:03:06.262174   17313 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1107 23:03:06.262196   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1107 23:03:06.336459   17313 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1107 23:03:06.336479   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1107 23:03:06.342744   17313 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1107 23:03:06.342760   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1107 23:03:06.346183   17313 pod_ready.go:92] pod "etcd-addons-245409" in "kube-system" namespace has status "Ready":"True"
	I1107 23:03:06.346202   17313 pod_ready.go:81] duration metric: took 592.90634ms waiting for pod "etcd-addons-245409" in "kube-system" namespace to be "Ready" ...
	I1107 23:03:06.346211   17313 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-245409" in "kube-system" namespace to be "Ready" ...
	I1107 23:03:06.380070   17313 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1107 23:03:06.380095   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1107 23:03:06.415401   17313 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1107 23:03:06.415423   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1107 23:03:06.433383   17313 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1107 23:03:06.433406   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1107 23:03:06.462102   17313 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1107 23:03:06.462130   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1107 23:03:06.490967   17313 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1107 23:03:06.490986   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1107 23:03:06.524762   17313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1107 23:03:06.552521   17313 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1107 23:03:06.552541   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1107 23:03:06.564993   17313 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1107 23:03:06.565016   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1107 23:03:06.621794   17313 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1107 23:03:06.621812   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1107 23:03:06.631779   17313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1107 23:03:06.670794   17313 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1107 23:03:06.670811   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1107 23:03:06.715022   17313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1107 23:03:06.725864   17313 pod_ready.go:92] pod "kube-apiserver-addons-245409" in "kube-system" namespace has status "Ready":"True"
	I1107 23:03:06.725883   17313 pod_ready.go:81] duration metric: took 379.665423ms waiting for pod "kube-apiserver-addons-245409" in "kube-system" namespace to be "Ready" ...
	I1107 23:03:06.725892   17313 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-245409" in "kube-system" namespace to be "Ready" ...
	I1107 23:03:06.914027   17313 pod_ready.go:92] pod "kube-controller-manager-addons-245409" in "kube-system" namespace has status "Ready":"True"
	I1107 23:03:06.914051   17313 pod_ready.go:81] duration metric: took 188.152999ms waiting for pod "kube-controller-manager-addons-245409" in "kube-system" namespace to be "Ready" ...
	I1107 23:03:06.914061   17313 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-245409" in "kube-system" namespace to be "Ready" ...
	I1107 23:03:06.944829   17313 pod_ready.go:92] pod "kube-scheduler-addons-245409" in "kube-system" namespace has status "Ready":"True"
	I1107 23:03:06.944846   17313 pod_ready.go:81] duration metric: took 30.778846ms waiting for pod "kube-scheduler-addons-245409" in "kube-system" namespace to be "Ready" ...
	I1107 23:03:06.944854   17313 pod_ready.go:38] duration metric: took 1.256776062s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
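The pod_ready waits above poll the API directly from Go; roughly the same check can be reproduced by hand with kubectl (a hypothetical manual invocation, not what minikube itself runs):

	kubectl --context addons-245409 -n kube-system wait \
	    --for=condition=Ready pod/etcd-addons-245409 --timeout=6m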
	I1107 23:03:06.944867   17313 api_server.go:52] waiting for apiserver process to appear ...
	I1107 23:03:06.944911   17313 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:03:11.073583   17313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.688000403s)
	I1107 23:03:11.073631   17313 main.go:141] libmachine: Making call to close driver server
	I1107 23:03:11.073640   17313 main.go:141] libmachine: (addons-245409) Calling .Close
	I1107 23:03:11.073913   17313 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:03:11.073938   17313 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:03:11.073950   17313 main.go:141] libmachine: Making call to close driver server
	I1107 23:03:11.073953   17313 main.go:141] libmachine: (addons-245409) DBG | Closing plugin on server side
	I1107 23:03:11.073961   17313 main.go:141] libmachine: (addons-245409) Calling .Close
	I1107 23:03:11.074196   17313 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:03:11.074209   17313 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:03:11.074223   17313 main.go:141] libmachine: (addons-245409) DBG | Closing plugin on server side
	I1107 23:03:11.847991   17313 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1107 23:03:11.848026   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHHostname
	I1107 23:03:11.851060   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:03:11.851575   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:03:11.851602   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:03:11.851811   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHPort
	I1107 23:03:11.852061   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:03:11.852248   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHUsername
	I1107 23:03:11.852410   17313 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409/id_rsa Username:docker}
	I1107 23:03:12.112169   17313 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1107 23:03:12.160397   17313 addons.go:231] Setting addon gcp-auth=true in "addons-245409"
	I1107 23:03:12.160454   17313 host.go:66] Checking if "addons-245409" exists ...
	I1107 23:03:12.160801   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:12.160840   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:12.175217   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46275
	I1107 23:03:12.176123   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:12.176661   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:12.176681   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:12.177048   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:12.177477   17313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:03:12.177505   17313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:03:12.231773   17313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39739
	I1107 23:03:12.232207   17313 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:03:12.232721   17313 main.go:141] libmachine: Using API Version  1
	I1107 23:03:12.232751   17313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:03:12.233103   17313 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:03:12.233276   17313 main.go:141] libmachine: (addons-245409) Calling .GetState
	I1107 23:03:12.234802   17313 main.go:141] libmachine: (addons-245409) Calling .DriverName
	I1107 23:03:12.235035   17313 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1107 23:03:12.235058   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHHostname
	I1107 23:03:12.237816   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:03:12.238233   17313 main.go:141] libmachine: (addons-245409) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:3b:12", ip: ""} in network mk-addons-245409: {Iface:virbr1 ExpiryTime:2023-11-08 00:02:26 +0000 UTC Type:0 Mac:52:54:00:69:3b:12 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-245409 Clientid:01:52:54:00:69:3b:12}
	I1107 23:03:12.238256   17313 main.go:141] libmachine: (addons-245409) DBG | domain addons-245409 has defined IP address 192.168.39.205 and MAC address 52:54:00:69:3b:12 in network mk-addons-245409
	I1107 23:03:12.238434   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHPort
	I1107 23:03:12.238604   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHKeyPath
	I1107 23:03:12.238799   17313 main.go:141] libmachine: (addons-245409) Calling .GetSSHUsername
	I1107 23:03:12.238939   17313 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/addons-245409/id_rsa Username:docker}
	I1107 23:03:12.250564   17313 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.694772953s)
	I1107 23:03:12.250593   17313 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1107 23:03:12.250621   17313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.6928654s)
	I1107 23:03:12.250657   17313 main.go:141] libmachine: Making call to close driver server
	I1107 23:03:12.250676   17313 main.go:141] libmachine: (addons-245409) Calling .Close
	I1107 23:03:12.250692   17313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.758801216s)
	I1107 23:03:12.250664   17313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.656676612s)
	I1107 23:03:12.250726   17313 main.go:141] libmachine: Making call to close driver server
	I1107 23:03:12.250740   17313 main.go:141] libmachine: (addons-245409) Calling .Close
	I1107 23:03:12.250740   17313 main.go:141] libmachine: Making call to close driver server
	I1107 23:03:12.250802   17313 main.go:141] libmachine: (addons-245409) Calling .Close
	I1107 23:03:12.250982   17313 main.go:141] libmachine: (addons-245409) DBG | Closing plugin on server side
	I1107 23:03:12.250991   17313 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:03:12.250993   17313 main.go:141] libmachine: (addons-245409) DBG | Closing plugin on server side
	I1107 23:03:12.251004   17313 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:03:12.251014   17313 main.go:141] libmachine: Making call to close driver server
	I1107 23:03:12.251013   17313 main.go:141] libmachine: (addons-245409) DBG | Closing plugin on server side
	I1107 23:03:12.251021   17313 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:03:12.251026   17313 main.go:141] libmachine: (addons-245409) Calling .Close
	I1107 23:03:12.251030   17313 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:03:12.251050   17313 main.go:141] libmachine: Making call to close driver server
	I1107 23:03:12.251059   17313 main.go:141] libmachine: (addons-245409) Calling .Close
	I1107 23:03:12.251237   17313 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:03:12.251257   17313 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:03:12.251300   17313 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:03:12.251311   17313 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:03:12.251320   17313 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:03:12.251330   17313 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:03:12.251340   17313 main.go:141] libmachine: Making call to close driver server
	I1107 23:03:12.251352   17313 main.go:141] libmachine: (addons-245409) Calling .Close
	I1107 23:03:12.251553   17313 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:03:12.251570   17313 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:03:12.397338   17313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.717113171s)
	I1107 23:03:12.397379   17313 main.go:141] libmachine: Making call to close driver server
	I1107 23:03:12.397395   17313 main.go:141] libmachine: (addons-245409) Calling .Close
	I1107 23:03:12.397624   17313 main.go:141] libmachine: (addons-245409) DBG | Closing plugin on server side
	I1107 23:03:12.397636   17313 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:03:12.397651   17313 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:03:12.397668   17313 main.go:141] libmachine: Making call to close driver server
	I1107 23:03:12.397677   17313 main.go:141] libmachine: (addons-245409) Calling .Close
	I1107 23:03:12.397907   17313 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:03:12.397922   17313 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:03:12.541706   17313 main.go:141] libmachine: Making call to close driver server
	I1107 23:03:12.541734   17313 main.go:141] libmachine: (addons-245409) Calling .Close
	I1107 23:03:12.542057   17313 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:03:12.542080   17313 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:03:12.542079   17313 main.go:141] libmachine: (addons-245409) DBG | Closing plugin on server side
	I1107 23:03:12.706878   17313 main.go:141] libmachine: Making call to close driver server
	I1107 23:03:12.706902   17313 main.go:141] libmachine: (addons-245409) Calling .Close
	I1107 23:03:12.707204   17313 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:03:12.707222   17313 main.go:141] libmachine: (addons-245409) DBG | Closing plugin on server side
	I1107 23:03:12.707227   17313 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:03:13.850909   17313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.080311625s)
	I1107 23:03:13.850951   17313 main.go:141] libmachine: Making call to close driver server
	I1107 23:03:13.850964   17313 main.go:141] libmachine: (addons-245409) Calling .Close
	I1107 23:03:13.850979   17313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.711203862s)
	I1107 23:03:13.850909   17313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.040860044s)
	I1107 23:03:13.850949   17313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.775730207s)
	I1107 23:03:13.851026   17313 main.go:141] libmachine: Making call to close driver server
	I1107 23:03:13.851039   17313 main.go:141] libmachine: (addons-245409) Calling .Close
	I1107 23:03:13.851002   17313 main.go:141] libmachine: Making call to close driver server
	I1107 23:03:13.851082   17313 main.go:141] libmachine: (addons-245409) Calling .Close
	I1107 23:03:13.851090   17313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.625709399s)
	I1107 23:03:13.851119   17313 main.go:141] libmachine: Making call to close driver server
	I1107 23:03:13.851147   17313 main.go:141] libmachine: Making call to close driver server
	I1107 23:03:13.851158   17313 main.go:141] libmachine: (addons-245409) Calling .Close
	I1107 23:03:13.851160   17313 main.go:141] libmachine: (addons-245409) Calling .Close
	I1107 23:03:13.851203   17313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.326408684s)
	I1107 23:03:13.851247   17313 main.go:141] libmachine: (addons-245409) DBG | Closing plugin on server side
	I1107 23:03:13.851261   17313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.219457992s)
	I1107 23:03:13.851279   17313 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:03:13.851280   17313 main.go:141] libmachine: Making call to close driver server
	I1107 23:03:13.851290   17313 main.go:141] libmachine: Making call to close connection to plugin binary
	W1107 23:03:13.851262   17313 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1107 23:03:13.851301   17313 main.go:141] libmachine: Making call to close driver server
	I1107 23:03:13.851307   17313 retry.go:31] will retry after 270.538344ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
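The failure is an ordering race: the VolumeSnapshotClass object is applied in the same batch as the CRD that defines its kind, and the apiserver has not finished registering the new API ("ensure CRDs are installed first"). minikube's retry below simply re-runs the apply with --force once the CRDs have settled; the generic workaround, sketched here under the same file paths, is to apply the CRDs first and wait for them to be established before applying objects of the new kind:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established --timeout=60s \
	    crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml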
	I1107 23:03:13.851316   17313 main.go:141] libmachine: (addons-245409) Calling .Close
	I1107 23:03:13.851291   17313 main.go:141] libmachine: (addons-245409) Calling .Close
	I1107 23:03:13.851417   17313 main.go:141] libmachine: (addons-245409) DBG | Closing plugin on server side
	I1107 23:03:13.851432   17313 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:03:13.851445   17313 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:03:13.851454   17313 main.go:141] libmachine: Making call to close driver server
	I1107 23:03:13.851464   17313 main.go:141] libmachine: (addons-245409) Calling .Close
	I1107 23:03:13.851882   17313 main.go:141] libmachine: (addons-245409) DBG | Closing plugin on server side
	I1107 23:03:13.851891   17313 main.go:141] libmachine: (addons-245409) DBG | Closing plugin on server side
	I1107 23:03:13.851912   17313 main.go:141] libmachine: (addons-245409) DBG | Closing plugin on server side
	I1107 23:03:13.851915   17313 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:03:13.851914   17313 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:03:13.851924   17313 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:03:13.851928   17313 main.go:141] libmachine: (addons-245409) DBG | Closing plugin on server side
	I1107 23:03:13.851929   17313 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:03:13.851934   17313 main.go:141] libmachine: Making call to close driver server
	I1107 23:03:13.851941   17313 main.go:141] libmachine: Making call to close driver server
	I1107 23:03:13.851943   17313 main.go:141] libmachine: (addons-245409) Calling .Close
	I1107 23:03:13.851950   17313 main.go:141] libmachine: (addons-245409) Calling .Close
	I1107 23:03:13.851953   17313 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:03:13.851962   17313 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:03:13.851990   17313 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:03:13.851999   17313 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:03:13.852007   17313 addons.go:467] Verifying addon ingress=true in "addons-245409"
	I1107 23:03:13.853824   17313 out.go:177] * Verifying ingress addon...
	I1107 23:03:13.852169   17313 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:03:13.852189   17313 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:03:13.852220   17313 main.go:141] libmachine: (addons-245409) DBG | Closing plugin on server side
	I1107 23:03:13.852243   17313 main.go:141] libmachine: (addons-245409) DBG | Closing plugin on server side
	I1107 23:03:13.852263   17313 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:03:13.852285   17313 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:03:13.852305   17313 main.go:141] libmachine: (addons-245409) DBG | Closing plugin on server side
	I1107 23:03:13.855190   17313 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:03:13.855206   17313 main.go:141] libmachine: Making call to close driver server
	I1107 23:03:13.855207   17313 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:03:13.855222   17313 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:03:13.855260   17313 main.go:141] libmachine: (addons-245409) Calling .Close
	I1107 23:03:13.855493   17313 main.go:141] libmachine: (addons-245409) DBG | Closing plugin on server side
	I1107 23:03:13.855508   17313 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:03:13.855520   17313 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:03:13.855540   17313 addons.go:467] Verifying addon registry=true in "addons-245409"
	I1107 23:03:13.856902   17313 out.go:177] * Verifying registry addon...
	I1107 23:03:13.856098   17313 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1107 23:03:13.856132   17313 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:03:13.858594   17313 main.go:141] libmachine: Making call to close driver server
	I1107 23:03:13.858606   17313 main.go:141] libmachine: (addons-245409) Calling .Close
	I1107 23:03:13.858850   17313 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:03:13.858868   17313 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:03:13.858872   17313 main.go:141] libmachine: (addons-245409) DBG | Closing plugin on server side
	I1107 23:03:13.858877   17313 addons.go:467] Verifying addon metrics-server=true in "addons-245409"
	I1107 23:03:13.859104   17313 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1107 23:03:13.873427   17313 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1107 23:03:13.873443   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:13.884916   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:13.886533   17313 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1107 23:03:13.886547   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:13.892413   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:14.122034   17313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1107 23:03:14.432332   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:14.433202   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:14.586725   17313 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (7.641787155s)
	I1107 23:03:14.586746   17313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.871671173s)
	I1107 23:03:14.586764   17313 api_server.go:72] duration metric: took 9.326145342s to wait for apiserver process to appear ...
	I1107 23:03:14.586772   17313 api_server.go:88] waiting for apiserver healthz status ...
	I1107 23:03:14.586799   17313 api_server.go:253] Checking apiserver healthz at https://192.168.39.205:8443/healthz ...
	I1107 23:03:14.586810   17313 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.35175658s)
	I1107 23:03:14.586795   17313 main.go:141] libmachine: Making call to close driver server
	I1107 23:03:14.586845   17313 main.go:141] libmachine: (addons-245409) Calling .Close
	I1107 23:03:14.588534   17313 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1107 23:03:14.587112   17313 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:03:14.587197   17313 main.go:141] libmachine: (addons-245409) DBG | Closing plugin on server side
	I1107 23:03:14.590068   17313 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:03:14.591442   17313 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1107 23:03:14.590093   17313 main.go:141] libmachine: Making call to close driver server
	I1107 23:03:14.591473   17313 main.go:141] libmachine: (addons-245409) Calling .Close
	I1107 23:03:14.592833   17313 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1107 23:03:14.592907   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1107 23:03:14.593129   17313 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:03:14.593147   17313 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:03:14.593158   17313 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-245409"
	I1107 23:03:14.594416   17313 out.go:177] * Verifying csi-hostpath-driver addon...
	I1107 23:03:14.596192   17313 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1107 23:03:14.604596   17313 api_server.go:279] https://192.168.39.205:8443/healthz returned 200:
	ok
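The healthz probe above hits the apiserver endpoint directly; the same check can be made by hand through the kubeconfig (a manual sketch, not part of the run):

	kubectl --context addons-245409 get --raw /healthz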
	I1107 23:03:14.632563   17313 api_server.go:141] control plane version: v1.28.3
	I1107 23:03:14.632587   17313 api_server.go:131] duration metric: took 45.809237ms to wait for apiserver health ...
	I1107 23:03:14.632594   17313 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 23:03:14.637659   17313 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1107 23:03:14.637678   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:14.675725   17313 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1107 23:03:14.675746   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1107 23:03:14.699012   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:14.714203   17313 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1107 23:03:14.714223   17313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1107 23:03:14.742593   17313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1107 23:03:14.783153   17313 system_pods.go:59] 18 kube-system pods found
	I1107 23:03:14.783191   17313 system_pods.go:61] "coredns-5dd5756b68-kqbfn" [6bca5551-b4bf-4b0c-b10d-497aef1406b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1107 23:03:14.783203   17313 system_pods.go:61] "csi-hostpath-attacher-0" [dd38dd33-9468-432c-ad1e-aba7d8c37bb1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1107 23:03:14.783210   17313 system_pods.go:61] "csi-hostpath-resizer-0" [4bd63d18-a27b-49ed-8019-47b56a75e07b] Pending
	I1107 23:03:14.783219   17313 system_pods.go:61] "csi-hostpathplugin-bvxsw" [6dab6e84-2182-4830-a969-aea7edbaa37d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1107 23:03:14.783230   17313 system_pods.go:61] "etcd-addons-245409" [efcc9f5b-187b-45dd-8cb9-4011c085f612] Running
	I1107 23:03:14.783242   17313 system_pods.go:61] "kube-apiserver-addons-245409" [9f8ad858-4712-45ed-bbf3-f6f1ef80853f] Running
	I1107 23:03:14.783251   17313 system_pods.go:61] "kube-controller-manager-addons-245409" [4bfc8f3b-b73f-4e28-ba82-ab73b38b4f9f] Running
	I1107 23:03:14.783262   17313 system_pods.go:61] "kube-ingress-dns-minikube" [9c3a54b4-2b64-4913-8bbd-2b6594c7b5ee] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1107 23:03:14.783282   17313 system_pods.go:61] "kube-proxy-trzdn" [25df10a0-64b6-412c-a77e-9cd904eba85a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1107 23:03:14.783297   17313 system_pods.go:61] "kube-scheduler-addons-245409" [ef55e648-2cfc-4dd1-9168-f501b8c459e3] Running
	I1107 23:03:14.783310   17313 system_pods.go:61] "metrics-server-7c66d45ddc-br2l5" [e2f32ef9-196f-4d46-b737-eb9c5c547080] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1107 23:03:14.783323   17313 system_pods.go:61] "nvidia-device-plugin-daemonset-fwsrr" [e19bacb9-6af8-46c3-96bf-707e41e6702b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1107 23:03:14.783336   17313 system_pods.go:61] "registry-d4mmm" [283feeee-43ee-480c-87f6-a6c43b6de51a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1107 23:03:14.783347   17313 system_pods.go:61] "registry-proxy-8s5cm" [5e9016ce-da43-4c67-babc-187a8a67e262] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1107 23:03:14.783360   17313 system_pods.go:61] "snapshot-controller-58dbcc7b99-g859r" [c18af89e-33d2-4643-9630-3735ee04005f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1107 23:03:14.783386   17313 system_pods.go:61] "snapshot-controller-58dbcc7b99-st2dq" [94e9aaa8-aba0-4b9b-accd-640cb6b35889] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1107 23:03:14.783405   17313 system_pods.go:61] "storage-provisioner" [6d52d7cf-93a5-4966-8fac-41e9b9cf2556] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1107 23:03:14.783417   17313 system_pods.go:61] "tiller-deploy-7b677967b9-g4rcj" [6f59e490-1ddf-4d4c-bf7e-ca497ac5f742] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1107 23:03:14.783428   17313 system_pods.go:74] duration metric: took 150.827932ms to wait for pod list to return data ...
	I1107 23:03:14.783440   17313 default_sa.go:34] waiting for default service account to be created ...
	I1107 23:03:14.792128   17313 default_sa.go:45] found service account: "default"
	I1107 23:03:14.792148   17313 default_sa.go:55] duration metric: took 8.699085ms for default service account to be created ...
	I1107 23:03:14.792155   17313 system_pods.go:116] waiting for k8s-apps to be running ...
	I1107 23:03:14.823672   17313 system_pods.go:86] 18 kube-system pods found
	I1107 23:03:14.823700   17313 system_pods.go:89] "coredns-5dd5756b68-kqbfn" [6bca5551-b4bf-4b0c-b10d-497aef1406b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1107 23:03:14.823707   17313 system_pods.go:89] "csi-hostpath-attacher-0" [dd38dd33-9468-432c-ad1e-aba7d8c37bb1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1107 23:03:14.823716   17313 system_pods.go:89] "csi-hostpath-resizer-0" [4bd63d18-a27b-49ed-8019-47b56a75e07b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1107 23:03:14.823725   17313 system_pods.go:89] "csi-hostpathplugin-bvxsw" [6dab6e84-2182-4830-a969-aea7edbaa37d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1107 23:03:14.823730   17313 system_pods.go:89] "etcd-addons-245409" [efcc9f5b-187b-45dd-8cb9-4011c085f612] Running
	I1107 23:03:14.823734   17313 system_pods.go:89] "kube-apiserver-addons-245409" [9f8ad858-4712-45ed-bbf3-f6f1ef80853f] Running
	I1107 23:03:14.823738   17313 system_pods.go:89] "kube-controller-manager-addons-245409" [4bfc8f3b-b73f-4e28-ba82-ab73b38b4f9f] Running
	I1107 23:03:14.823749   17313 system_pods.go:89] "kube-ingress-dns-minikube" [9c3a54b4-2b64-4913-8bbd-2b6594c7b5ee] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1107 23:03:14.823760   17313 system_pods.go:89] "kube-proxy-trzdn" [25df10a0-64b6-412c-a77e-9cd904eba85a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1107 23:03:14.823766   17313 system_pods.go:89] "kube-scheduler-addons-245409" [ef55e648-2cfc-4dd1-9168-f501b8c459e3] Running
	I1107 23:03:14.823772   17313 system_pods.go:89] "metrics-server-7c66d45ddc-br2l5" [e2f32ef9-196f-4d46-b737-eb9c5c547080] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1107 23:03:14.823795   17313 system_pods.go:89] "nvidia-device-plugin-daemonset-fwsrr" [e19bacb9-6af8-46c3-96bf-707e41e6702b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1107 23:03:14.823809   17313 system_pods.go:89] "registry-d4mmm" [283feeee-43ee-480c-87f6-a6c43b6de51a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1107 23:03:14.823814   17313 system_pods.go:89] "registry-proxy-8s5cm" [5e9016ce-da43-4c67-babc-187a8a67e262] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1107 23:03:14.823819   17313 system_pods.go:89] "snapshot-controller-58dbcc7b99-g859r" [c18af89e-33d2-4643-9630-3735ee04005f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1107 23:03:14.823828   17313 system_pods.go:89] "snapshot-controller-58dbcc7b99-st2dq" [94e9aaa8-aba0-4b9b-accd-640cb6b35889] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1107 23:03:14.823839   17313 system_pods.go:89] "storage-provisioner" [6d52d7cf-93a5-4966-8fac-41e9b9cf2556] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1107 23:03:14.823850   17313 system_pods.go:89] "tiller-deploy-7b677967b9-g4rcj" [6f59e490-1ddf-4d4c-bf7e-ca497ac5f742] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1107 23:03:14.823879   17313 retry.go:31] will retry after 249.90989ms: missing components: kube-proxy
	I1107 23:03:14.899641   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:14.934277   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:15.103646   17313 system_pods.go:86] 18 kube-system pods found
	I1107 23:03:15.103671   17313 system_pods.go:89] "coredns-5dd5756b68-kqbfn" [6bca5551-b4bf-4b0c-b10d-497aef1406b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1107 23:03:15.103681   17313 system_pods.go:89] "csi-hostpath-attacher-0" [dd38dd33-9468-432c-ad1e-aba7d8c37bb1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1107 23:03:15.103692   17313 system_pods.go:89] "csi-hostpath-resizer-0" [4bd63d18-a27b-49ed-8019-47b56a75e07b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1107 23:03:15.103698   17313 system_pods.go:89] "csi-hostpathplugin-bvxsw" [6dab6e84-2182-4830-a969-aea7edbaa37d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1107 23:03:15.103703   17313 system_pods.go:89] "etcd-addons-245409" [efcc9f5b-187b-45dd-8cb9-4011c085f612] Running
	I1107 23:03:15.103708   17313 system_pods.go:89] "kube-apiserver-addons-245409" [9f8ad858-4712-45ed-bbf3-f6f1ef80853f] Running
	I1107 23:03:15.103713   17313 system_pods.go:89] "kube-controller-manager-addons-245409" [4bfc8f3b-b73f-4e28-ba82-ab73b38b4f9f] Running
	I1107 23:03:15.103718   17313 system_pods.go:89] "kube-ingress-dns-minikube" [9c3a54b4-2b64-4913-8bbd-2b6594c7b5ee] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1107 23:03:15.103727   17313 system_pods.go:89] "kube-proxy-trzdn" [25df10a0-64b6-412c-a77e-9cd904eba85a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1107 23:03:15.103731   17313 system_pods.go:89] "kube-scheduler-addons-245409" [ef55e648-2cfc-4dd1-9168-f501b8c459e3] Running
	I1107 23:03:15.103736   17313 system_pods.go:89] "metrics-server-7c66d45ddc-br2l5" [e2f32ef9-196f-4d46-b737-eb9c5c547080] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1107 23:03:15.103744   17313 system_pods.go:89] "nvidia-device-plugin-daemonset-fwsrr" [e19bacb9-6af8-46c3-96bf-707e41e6702b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1107 23:03:15.103750   17313 system_pods.go:89] "registry-d4mmm" [283feeee-43ee-480c-87f6-a6c43b6de51a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1107 23:03:15.103757   17313 system_pods.go:89] "registry-proxy-8s5cm" [5e9016ce-da43-4c67-babc-187a8a67e262] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1107 23:03:15.103763   17313 system_pods.go:89] "snapshot-controller-58dbcc7b99-g859r" [c18af89e-33d2-4643-9630-3735ee04005f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1107 23:03:15.103774   17313 system_pods.go:89] "snapshot-controller-58dbcc7b99-st2dq" [94e9aaa8-aba0-4b9b-accd-640cb6b35889] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1107 23:03:15.103782   17313 system_pods.go:89] "storage-provisioner" [6d52d7cf-93a5-4966-8fac-41e9b9cf2556] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1107 23:03:15.103790   17313 system_pods.go:89] "tiller-deploy-7b677967b9-g4rcj" [6f59e490-1ddf-4d4c-bf7e-ca497ac5f742] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1107 23:03:15.103802   17313 retry.go:31] will retry after 377.253473ms: missing components: kube-proxy
	I1107 23:03:15.207250   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:15.479995   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:15.480956   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:15.531861   17313 system_pods.go:86] 18 kube-system pods found
	I1107 23:03:15.531888   17313 system_pods.go:89] "coredns-5dd5756b68-kqbfn" [6bca5551-b4bf-4b0c-b10d-497aef1406b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1107 23:03:15.531896   17313 system_pods.go:89] "csi-hostpath-attacher-0" [dd38dd33-9468-432c-ad1e-aba7d8c37bb1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1107 23:03:15.531908   17313 system_pods.go:89] "csi-hostpath-resizer-0" [4bd63d18-a27b-49ed-8019-47b56a75e07b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1107 23:03:15.531917   17313 system_pods.go:89] "csi-hostpathplugin-bvxsw" [6dab6e84-2182-4830-a969-aea7edbaa37d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1107 23:03:15.531930   17313 system_pods.go:89] "etcd-addons-245409" [efcc9f5b-187b-45dd-8cb9-4011c085f612] Running
	I1107 23:03:15.531941   17313 system_pods.go:89] "kube-apiserver-addons-245409" [9f8ad858-4712-45ed-bbf3-f6f1ef80853f] Running
	I1107 23:03:15.531951   17313 system_pods.go:89] "kube-controller-manager-addons-245409" [4bfc8f3b-b73f-4e28-ba82-ab73b38b4f9f] Running
	I1107 23:03:15.531962   17313 system_pods.go:89] "kube-ingress-dns-minikube" [9c3a54b4-2b64-4913-8bbd-2b6594c7b5ee] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1107 23:03:15.531971   17313 system_pods.go:89] "kube-proxy-trzdn" [25df10a0-64b6-412c-a77e-9cd904eba85a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1107 23:03:15.531978   17313 system_pods.go:89] "kube-scheduler-addons-245409" [ef55e648-2cfc-4dd1-9168-f501b8c459e3] Running
	I1107 23:03:15.531984   17313 system_pods.go:89] "metrics-server-7c66d45ddc-br2l5" [e2f32ef9-196f-4d46-b737-eb9c5c547080] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1107 23:03:15.531994   17313 system_pods.go:89] "nvidia-device-plugin-daemonset-fwsrr" [e19bacb9-6af8-46c3-96bf-707e41e6702b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1107 23:03:15.532002   17313 system_pods.go:89] "registry-d4mmm" [283feeee-43ee-480c-87f6-a6c43b6de51a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1107 23:03:15.532011   17313 system_pods.go:89] "registry-proxy-8s5cm" [5e9016ce-da43-4c67-babc-187a8a67e262] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1107 23:03:15.532023   17313 system_pods.go:89] "snapshot-controller-58dbcc7b99-g859r" [c18af89e-33d2-4643-9630-3735ee04005f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1107 23:03:15.532037   17313 system_pods.go:89] "snapshot-controller-58dbcc7b99-st2dq" [94e9aaa8-aba0-4b9b-accd-640cb6b35889] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1107 23:03:15.532051   17313 system_pods.go:89] "storage-provisioner" [6d52d7cf-93a5-4966-8fac-41e9b9cf2556] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1107 23:03:15.532065   17313 system_pods.go:89] "tiller-deploy-7b677967b9-g4rcj" [6f59e490-1ddf-4d4c-bf7e-ca497ac5f742] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1107 23:03:15.532085   17313 retry.go:31] will retry after 359.478872ms: missing components: kube-proxy
	I1107 23:03:15.706439   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:15.897348   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:15.908327   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:15.924469   17313 system_pods.go:86] 18 kube-system pods found
	I1107 23:03:15.924497   17313 system_pods.go:89] "coredns-5dd5756b68-kqbfn" [6bca5551-b4bf-4b0c-b10d-497aef1406b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1107 23:03:15.924505   17313 system_pods.go:89] "csi-hostpath-attacher-0" [dd38dd33-9468-432c-ad1e-aba7d8c37bb1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1107 23:03:15.924513   17313 system_pods.go:89] "csi-hostpath-resizer-0" [4bd63d18-a27b-49ed-8019-47b56a75e07b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1107 23:03:15.924527   17313 system_pods.go:89] "csi-hostpathplugin-bvxsw" [6dab6e84-2182-4830-a969-aea7edbaa37d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1107 23:03:15.924543   17313 system_pods.go:89] "etcd-addons-245409" [efcc9f5b-187b-45dd-8cb9-4011c085f612] Running
	I1107 23:03:15.924550   17313 system_pods.go:89] "kube-apiserver-addons-245409" [9f8ad858-4712-45ed-bbf3-f6f1ef80853f] Running
	I1107 23:03:15.924560   17313 system_pods.go:89] "kube-controller-manager-addons-245409" [4bfc8f3b-b73f-4e28-ba82-ab73b38b4f9f] Running
	I1107 23:03:15.924574   17313 system_pods.go:89] "kube-ingress-dns-minikube" [9c3a54b4-2b64-4913-8bbd-2b6594c7b5ee] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1107 23:03:15.924583   17313 system_pods.go:89] "kube-proxy-trzdn" [25df10a0-64b6-412c-a77e-9cd904eba85a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1107 23:03:15.924588   17313 system_pods.go:89] "kube-scheduler-addons-245409" [ef55e648-2cfc-4dd1-9168-f501b8c459e3] Running
	I1107 23:03:15.924594   17313 system_pods.go:89] "metrics-server-7c66d45ddc-br2l5" [e2f32ef9-196f-4d46-b737-eb9c5c547080] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1107 23:03:15.924600   17313 system_pods.go:89] "nvidia-device-plugin-daemonset-fwsrr" [e19bacb9-6af8-46c3-96bf-707e41e6702b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1107 23:03:15.924608   17313 system_pods.go:89] "registry-d4mmm" [283feeee-43ee-480c-87f6-a6c43b6de51a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1107 23:03:15.924614   17313 system_pods.go:89] "registry-proxy-8s5cm" [5e9016ce-da43-4c67-babc-187a8a67e262] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1107 23:03:15.924624   17313 system_pods.go:89] "snapshot-controller-58dbcc7b99-g859r" [c18af89e-33d2-4643-9630-3735ee04005f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1107 23:03:15.924636   17313 system_pods.go:89] "snapshot-controller-58dbcc7b99-st2dq" [94e9aaa8-aba0-4b9b-accd-640cb6b35889] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1107 23:03:15.924649   17313 system_pods.go:89] "storage-provisioner" [6d52d7cf-93a5-4966-8fac-41e9b9cf2556] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1107 23:03:15.924663   17313 system_pods.go:89] "tiller-deploy-7b677967b9-g4rcj" [6f59e490-1ddf-4d4c-bf7e-ca497ac5f742] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1107 23:03:15.924681   17313 retry.go:31] will retry after 426.80734ms: missing components: kube-proxy
	I1107 23:03:16.205387   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:16.387416   17313 system_pods.go:86] 18 kube-system pods found
	I1107 23:03:16.387442   17313 system_pods.go:89] "coredns-5dd5756b68-kqbfn" [6bca5551-b4bf-4b0c-b10d-497aef1406b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1107 23:03:16.387451   17313 system_pods.go:89] "csi-hostpath-attacher-0" [dd38dd33-9468-432c-ad1e-aba7d8c37bb1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1107 23:03:16.387459   17313 system_pods.go:89] "csi-hostpath-resizer-0" [4bd63d18-a27b-49ed-8019-47b56a75e07b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1107 23:03:16.387465   17313 system_pods.go:89] "csi-hostpathplugin-bvxsw" [6dab6e84-2182-4830-a969-aea7edbaa37d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1107 23:03:16.387471   17313 system_pods.go:89] "etcd-addons-245409" [efcc9f5b-187b-45dd-8cb9-4011c085f612] Running
	I1107 23:03:16.387476   17313 system_pods.go:89] "kube-apiserver-addons-245409" [9f8ad858-4712-45ed-bbf3-f6f1ef80853f] Running
	I1107 23:03:16.387482   17313 system_pods.go:89] "kube-controller-manager-addons-245409" [4bfc8f3b-b73f-4e28-ba82-ab73b38b4f9f] Running
	I1107 23:03:16.387487   17313 system_pods.go:89] "kube-ingress-dns-minikube" [9c3a54b4-2b64-4913-8bbd-2b6594c7b5ee] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1107 23:03:16.387495   17313 system_pods.go:89] "kube-proxy-trzdn" [25df10a0-64b6-412c-a77e-9cd904eba85a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1107 23:03:16.387499   17313 system_pods.go:89] "kube-scheduler-addons-245409" [ef55e648-2cfc-4dd1-9168-f501b8c459e3] Running
	I1107 23:03:16.387509   17313 system_pods.go:89] "metrics-server-7c66d45ddc-br2l5" [e2f32ef9-196f-4d46-b737-eb9c5c547080] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1107 23:03:16.387515   17313 system_pods.go:89] "nvidia-device-plugin-daemonset-fwsrr" [e19bacb9-6af8-46c3-96bf-707e41e6702b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1107 23:03:16.387524   17313 system_pods.go:89] "registry-d4mmm" [283feeee-43ee-480c-87f6-a6c43b6de51a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1107 23:03:16.387530   17313 system_pods.go:89] "registry-proxy-8s5cm" [5e9016ce-da43-4c67-babc-187a8a67e262] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1107 23:03:16.387544   17313 system_pods.go:89] "snapshot-controller-58dbcc7b99-g859r" [c18af89e-33d2-4643-9630-3735ee04005f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1107 23:03:16.387550   17313 system_pods.go:89] "snapshot-controller-58dbcc7b99-st2dq" [94e9aaa8-aba0-4b9b-accd-640cb6b35889] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1107 23:03:16.387556   17313 system_pods.go:89] "storage-provisioner" [6d52d7cf-93a5-4966-8fac-41e9b9cf2556] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1107 23:03:16.387566   17313 system_pods.go:89] "tiller-deploy-7b677967b9-g4rcj" [6f59e490-1ddf-4d4c-bf7e-ca497ac5f742] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1107 23:03:16.387581   17313 retry.go:31] will retry after 690.495837ms: missing components: kube-proxy
	I1107 23:03:16.392607   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:16.415377   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:16.746336   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:16.762990   17313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.640900025s)
	I1107 23:03:16.763052   17313 main.go:141] libmachine: Making call to close driver server
	I1107 23:03:16.763068   17313 main.go:141] libmachine: (addons-245409) Calling .Close
	I1107 23:03:16.763349   17313 main.go:141] libmachine: (addons-245409) DBG | Closing plugin on server side
	I1107 23:03:16.763390   17313 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:03:16.763404   17313 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:03:16.763416   17313 main.go:141] libmachine: Making call to close driver server
	I1107 23:03:16.763427   17313 main.go:141] libmachine: (addons-245409) Calling .Close
	I1107 23:03:16.763622   17313 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:03:16.763633   17313 main.go:141] libmachine: (addons-245409) DBG | Closing plugin on server side
	I1107 23:03:16.763637   17313 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:03:16.897619   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:16.910420   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:17.080486   17313 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.337850602s)
	I1107 23:03:17.080530   17313 main.go:141] libmachine: Making call to close driver server
	I1107 23:03:17.080553   17313 main.go:141] libmachine: (addons-245409) Calling .Close
	I1107 23:03:17.080859   17313 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:03:17.080885   17313 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:03:17.080888   17313 main.go:141] libmachine: (addons-245409) DBG | Closing plugin on server side
	I1107 23:03:17.080901   17313 main.go:141] libmachine: Making call to close driver server
	I1107 23:03:17.080912   17313 main.go:141] libmachine: (addons-245409) Calling .Close
	I1107 23:03:17.081135   17313 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:03:17.081152   17313 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:03:17.082652   17313 addons.go:467] Verifying addon gcp-auth=true in "addons-245409"
	I1107 23:03:17.084575   17313 out.go:177] * Verifying gcp-auth addon...
	I1107 23:03:17.086444   17313 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1107 23:03:17.102760   17313 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1107 23:03:17.102780   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:17.107874   17313 system_pods.go:86] 18 kube-system pods found
	I1107 23:03:17.107901   17313 system_pods.go:89] "coredns-5dd5756b68-kqbfn" [6bca5551-b4bf-4b0c-b10d-497aef1406b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1107 23:03:17.107909   17313 system_pods.go:89] "csi-hostpath-attacher-0" [dd38dd33-9468-432c-ad1e-aba7d8c37bb1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1107 23:03:17.107917   17313 system_pods.go:89] "csi-hostpath-resizer-0" [4bd63d18-a27b-49ed-8019-47b56a75e07b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1107 23:03:17.107924   17313 system_pods.go:89] "csi-hostpathplugin-bvxsw" [6dab6e84-2182-4830-a969-aea7edbaa37d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1107 23:03:17.107929   17313 system_pods.go:89] "etcd-addons-245409" [efcc9f5b-187b-45dd-8cb9-4011c085f612] Running
	I1107 23:03:17.107934   17313 system_pods.go:89] "kube-apiserver-addons-245409" [9f8ad858-4712-45ed-bbf3-f6f1ef80853f] Running
	I1107 23:03:17.107938   17313 system_pods.go:89] "kube-controller-manager-addons-245409" [4bfc8f3b-b73f-4e28-ba82-ab73b38b4f9f] Running
	I1107 23:03:17.107947   17313 system_pods.go:89] "kube-ingress-dns-minikube" [9c3a54b4-2b64-4913-8bbd-2b6594c7b5ee] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1107 23:03:17.107953   17313 system_pods.go:89] "kube-proxy-trzdn" [25df10a0-64b6-412c-a77e-9cd904eba85a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1107 23:03:17.107965   17313 system_pods.go:89] "kube-scheduler-addons-245409" [ef55e648-2cfc-4dd1-9168-f501b8c459e3] Running
	I1107 23:03:17.107970   17313 system_pods.go:89] "metrics-server-7c66d45ddc-br2l5" [e2f32ef9-196f-4d46-b737-eb9c5c547080] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1107 23:03:17.107976   17313 system_pods.go:89] "nvidia-device-plugin-daemonset-fwsrr" [e19bacb9-6af8-46c3-96bf-707e41e6702b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1107 23:03:17.107984   17313 system_pods.go:89] "registry-d4mmm" [283feeee-43ee-480c-87f6-a6c43b6de51a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1107 23:03:17.107990   17313 system_pods.go:89] "registry-proxy-8s5cm" [5e9016ce-da43-4c67-babc-187a8a67e262] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1107 23:03:17.107999   17313 system_pods.go:89] "snapshot-controller-58dbcc7b99-g859r" [c18af89e-33d2-4643-9630-3735ee04005f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1107 23:03:17.108008   17313 system_pods.go:89] "snapshot-controller-58dbcc7b99-st2dq" [94e9aaa8-aba0-4b9b-accd-640cb6b35889] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1107 23:03:17.108015   17313 system_pods.go:89] "storage-provisioner" [6d52d7cf-93a5-4966-8fac-41e9b9cf2556] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1107 23:03:17.108025   17313 system_pods.go:89] "tiller-deploy-7b677967b9-g4rcj" [6f59e490-1ddf-4d4c-bf7e-ca497ac5f742] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1107 23:03:17.108039   17313 retry.go:31] will retry after 941.539134ms: missing components: kube-proxy
	I1107 23:03:17.130668   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:17.207021   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:17.398861   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:17.407689   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:17.640643   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:17.708997   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:17.891620   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:17.897296   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:18.059261   17313 system_pods.go:86] 18 kube-system pods found
	I1107 23:03:18.059295   17313 system_pods.go:89] "coredns-5dd5756b68-kqbfn" [6bca5551-b4bf-4b0c-b10d-497aef1406b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1107 23:03:18.059303   17313 system_pods.go:89] "csi-hostpath-attacher-0" [dd38dd33-9468-432c-ad1e-aba7d8c37bb1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1107 23:03:18.059311   17313 system_pods.go:89] "csi-hostpath-resizer-0" [4bd63d18-a27b-49ed-8019-47b56a75e07b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1107 23:03:18.059318   17313 system_pods.go:89] "csi-hostpathplugin-bvxsw" [6dab6e84-2182-4830-a969-aea7edbaa37d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1107 23:03:18.059323   17313 system_pods.go:89] "etcd-addons-245409" [efcc9f5b-187b-45dd-8cb9-4011c085f612] Running
	I1107 23:03:18.059328   17313 system_pods.go:89] "kube-apiserver-addons-245409" [9f8ad858-4712-45ed-bbf3-f6f1ef80853f] Running
	I1107 23:03:18.059332   17313 system_pods.go:89] "kube-controller-manager-addons-245409" [4bfc8f3b-b73f-4e28-ba82-ab73b38b4f9f] Running
	I1107 23:03:18.059340   17313 system_pods.go:89] "kube-ingress-dns-minikube" [9c3a54b4-2b64-4913-8bbd-2b6594c7b5ee] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1107 23:03:18.059346   17313 system_pods.go:89] "kube-proxy-trzdn" [25df10a0-64b6-412c-a77e-9cd904eba85a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1107 23:03:18.059355   17313 system_pods.go:89] "kube-scheduler-addons-245409" [ef55e648-2cfc-4dd1-9168-f501b8c459e3] Running
	I1107 23:03:18.059360   17313 system_pods.go:89] "metrics-server-7c66d45ddc-br2l5" [e2f32ef9-196f-4d46-b737-eb9c5c547080] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1107 23:03:18.059374   17313 system_pods.go:89] "nvidia-device-plugin-daemonset-fwsrr" [e19bacb9-6af8-46c3-96bf-707e41e6702b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1107 23:03:18.059379   17313 system_pods.go:89] "registry-d4mmm" [283feeee-43ee-480c-87f6-a6c43b6de51a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1107 23:03:18.059388   17313 system_pods.go:89] "registry-proxy-8s5cm" [5e9016ce-da43-4c67-babc-187a8a67e262] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1107 23:03:18.059396   17313 system_pods.go:89] "snapshot-controller-58dbcc7b99-g859r" [c18af89e-33d2-4643-9630-3735ee04005f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1107 23:03:18.059416   17313 system_pods.go:89] "snapshot-controller-58dbcc7b99-st2dq" [94e9aaa8-aba0-4b9b-accd-640cb6b35889] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1107 23:03:18.059427   17313 system_pods.go:89] "storage-provisioner" [6d52d7cf-93a5-4966-8fac-41e9b9cf2556] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1107 23:03:18.059436   17313 system_pods.go:89] "tiller-deploy-7b677967b9-g4rcj" [6f59e490-1ddf-4d4c-bf7e-ca497ac5f742] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1107 23:03:18.059452   17313 retry.go:31] will retry after 742.891275ms: missing components: kube-proxy
	I1107 23:03:18.143369   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:18.206499   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:18.395408   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:18.410619   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:18.634827   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:18.705287   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:18.818296   17313 system_pods.go:86] 18 kube-system pods found
	I1107 23:03:18.818324   17313 system_pods.go:89] "coredns-5dd5756b68-kqbfn" [6bca5551-b4bf-4b0c-b10d-497aef1406b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1107 23:03:18.818332   17313 system_pods.go:89] "csi-hostpath-attacher-0" [dd38dd33-9468-432c-ad1e-aba7d8c37bb1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1107 23:03:18.818341   17313 system_pods.go:89] "csi-hostpath-resizer-0" [4bd63d18-a27b-49ed-8019-47b56a75e07b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1107 23:03:18.818347   17313 system_pods.go:89] "csi-hostpathplugin-bvxsw" [6dab6e84-2182-4830-a969-aea7edbaa37d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1107 23:03:18.818352   17313 system_pods.go:89] "etcd-addons-245409" [efcc9f5b-187b-45dd-8cb9-4011c085f612] Running
	I1107 23:03:18.818356   17313 system_pods.go:89] "kube-apiserver-addons-245409" [9f8ad858-4712-45ed-bbf3-f6f1ef80853f] Running
	I1107 23:03:18.818361   17313 system_pods.go:89] "kube-controller-manager-addons-245409" [4bfc8f3b-b73f-4e28-ba82-ab73b38b4f9f] Running
	I1107 23:03:18.818367   17313 system_pods.go:89] "kube-ingress-dns-minikube" [9c3a54b4-2b64-4913-8bbd-2b6594c7b5ee] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1107 23:03:18.818372   17313 system_pods.go:89] "kube-proxy-trzdn" [25df10a0-64b6-412c-a77e-9cd904eba85a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1107 23:03:18.818376   17313 system_pods.go:89] "kube-scheduler-addons-245409" [ef55e648-2cfc-4dd1-9168-f501b8c459e3] Running
	I1107 23:03:18.818382   17313 system_pods.go:89] "metrics-server-7c66d45ddc-br2l5" [e2f32ef9-196f-4d46-b737-eb9c5c547080] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1107 23:03:18.818391   17313 system_pods.go:89] "nvidia-device-plugin-daemonset-fwsrr" [e19bacb9-6af8-46c3-96bf-707e41e6702b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1107 23:03:18.818397   17313 system_pods.go:89] "registry-d4mmm" [283feeee-43ee-480c-87f6-a6c43b6de51a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1107 23:03:18.818403   17313 system_pods.go:89] "registry-proxy-8s5cm" [5e9016ce-da43-4c67-babc-187a8a67e262] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1107 23:03:18.818409   17313 system_pods.go:89] "snapshot-controller-58dbcc7b99-g859r" [c18af89e-33d2-4643-9630-3735ee04005f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1107 23:03:18.818416   17313 system_pods.go:89] "snapshot-controller-58dbcc7b99-st2dq" [94e9aaa8-aba0-4b9b-accd-640cb6b35889] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1107 23:03:18.818423   17313 system_pods.go:89] "storage-provisioner" [6d52d7cf-93a5-4966-8fac-41e9b9cf2556] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1107 23:03:18.818429   17313 system_pods.go:89] "tiller-deploy-7b677967b9-g4rcj" [6f59e490-1ddf-4d4c-bf7e-ca497ac5f742] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1107 23:03:18.818445   17313 retry.go:31] will retry after 1.396188732s: missing components: kube-proxy
	I1107 23:03:18.890238   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:18.900513   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:19.136741   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:19.204759   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:19.391825   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:19.397232   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:19.634533   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:19.706005   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:19.889744   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:19.897576   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:20.134617   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:20.204523   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:20.223147   17313 system_pods.go:86] 18 kube-system pods found
	I1107 23:03:20.223175   17313 system_pods.go:89] "coredns-5dd5756b68-kqbfn" [6bca5551-b4bf-4b0c-b10d-497aef1406b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1107 23:03:20.223187   17313 system_pods.go:89] "csi-hostpath-attacher-0" [dd38dd33-9468-432c-ad1e-aba7d8c37bb1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1107 23:03:20.223197   17313 system_pods.go:89] "csi-hostpath-resizer-0" [4bd63d18-a27b-49ed-8019-47b56a75e07b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1107 23:03:20.223207   17313 system_pods.go:89] "csi-hostpathplugin-bvxsw" [6dab6e84-2182-4830-a969-aea7edbaa37d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1107 23:03:20.223214   17313 system_pods.go:89] "etcd-addons-245409" [efcc9f5b-187b-45dd-8cb9-4011c085f612] Running
	I1107 23:03:20.223222   17313 system_pods.go:89] "kube-apiserver-addons-245409" [9f8ad858-4712-45ed-bbf3-f6f1ef80853f] Running
	I1107 23:03:20.223229   17313 system_pods.go:89] "kube-controller-manager-addons-245409" [4bfc8f3b-b73f-4e28-ba82-ab73b38b4f9f] Running
	I1107 23:03:20.223242   17313 system_pods.go:89] "kube-ingress-dns-minikube" [9c3a54b4-2b64-4913-8bbd-2b6594c7b5ee] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1107 23:03:20.223253   17313 system_pods.go:89] "kube-proxy-trzdn" [25df10a0-64b6-412c-a77e-9cd904eba85a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1107 23:03:20.223261   17313 system_pods.go:89] "kube-scheduler-addons-245409" [ef55e648-2cfc-4dd1-9168-f501b8c459e3] Running
	I1107 23:03:20.223271   17313 system_pods.go:89] "metrics-server-7c66d45ddc-br2l5" [e2f32ef9-196f-4d46-b737-eb9c5c547080] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1107 23:03:20.223290   17313 system_pods.go:89] "nvidia-device-plugin-daemonset-fwsrr" [e19bacb9-6af8-46c3-96bf-707e41e6702b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1107 23:03:20.223301   17313 system_pods.go:89] "registry-d4mmm" [283feeee-43ee-480c-87f6-a6c43b6de51a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1107 23:03:20.223312   17313 system_pods.go:89] "registry-proxy-8s5cm" [5e9016ce-da43-4c67-babc-187a8a67e262] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1107 23:03:20.223323   17313 system_pods.go:89] "snapshot-controller-58dbcc7b99-g859r" [c18af89e-33d2-4643-9630-3735ee04005f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1107 23:03:20.223338   17313 system_pods.go:89] "snapshot-controller-58dbcc7b99-st2dq" [94e9aaa8-aba0-4b9b-accd-640cb6b35889] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1107 23:03:20.223348   17313 system_pods.go:89] "storage-provisioner" [6d52d7cf-93a5-4966-8fac-41e9b9cf2556] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1107 23:03:20.223364   17313 system_pods.go:89] "tiller-deploy-7b677967b9-g4rcj" [6f59e490-1ddf-4d4c-bf7e-ca497ac5f742] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1107 23:03:20.223381   17313 retry.go:31] will retry after 1.46729285s: missing components: kube-proxy
	I1107 23:03:20.398580   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:20.399752   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:20.635271   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:20.704224   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:20.890090   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:20.900132   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:21.134607   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:21.204698   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:21.389326   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:21.396885   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:21.635757   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:21.700668   17313 system_pods.go:86] 18 kube-system pods found
	I1107 23:03:21.700700   17313 system_pods.go:89] "coredns-5dd5756b68-kqbfn" [6bca5551-b4bf-4b0c-b10d-497aef1406b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1107 23:03:21.700714   17313 system_pods.go:89] "csi-hostpath-attacher-0" [dd38dd33-9468-432c-ad1e-aba7d8c37bb1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1107 23:03:21.700725   17313 system_pods.go:89] "csi-hostpath-resizer-0" [4bd63d18-a27b-49ed-8019-47b56a75e07b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1107 23:03:21.700734   17313 system_pods.go:89] "csi-hostpathplugin-bvxsw" [6dab6e84-2182-4830-a969-aea7edbaa37d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1107 23:03:21.700741   17313 system_pods.go:89] "etcd-addons-245409" [efcc9f5b-187b-45dd-8cb9-4011c085f612] Running
	I1107 23:03:21.700749   17313 system_pods.go:89] "kube-apiserver-addons-245409" [9f8ad858-4712-45ed-bbf3-f6f1ef80853f] Running
	I1107 23:03:21.700756   17313 system_pods.go:89] "kube-controller-manager-addons-245409" [4bfc8f3b-b73f-4e28-ba82-ab73b38b4f9f] Running
	I1107 23:03:21.700767   17313 system_pods.go:89] "kube-ingress-dns-minikube" [9c3a54b4-2b64-4913-8bbd-2b6594c7b5ee] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1107 23:03:21.700775   17313 system_pods.go:89] "kube-proxy-trzdn" [25df10a0-64b6-412c-a77e-9cd904eba85a] Running
	I1107 23:03:21.700784   17313 system_pods.go:89] "kube-scheduler-addons-245409" [ef55e648-2cfc-4dd1-9168-f501b8c459e3] Running
	I1107 23:03:21.700802   17313 system_pods.go:89] "metrics-server-7c66d45ddc-br2l5" [e2f32ef9-196f-4d46-b737-eb9c5c547080] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1107 23:03:21.700827   17313 system_pods.go:89] "nvidia-device-plugin-daemonset-fwsrr" [e19bacb9-6af8-46c3-96bf-707e41e6702b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1107 23:03:21.700841   17313 system_pods.go:89] "registry-d4mmm" [283feeee-43ee-480c-87f6-a6c43b6de51a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1107 23:03:21.700866   17313 system_pods.go:89] "registry-proxy-8s5cm" [5e9016ce-da43-4c67-babc-187a8a67e262] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1107 23:03:21.700876   17313 system_pods.go:89] "snapshot-controller-58dbcc7b99-g859r" [c18af89e-33d2-4643-9630-3735ee04005f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1107 23:03:21.700891   17313 system_pods.go:89] "snapshot-controller-58dbcc7b99-st2dq" [94e9aaa8-aba0-4b9b-accd-640cb6b35889] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1107 23:03:21.700901   17313 system_pods.go:89] "storage-provisioner" [6d52d7cf-93a5-4966-8fac-41e9b9cf2556] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1107 23:03:21.700909   17313 system_pods.go:89] "tiller-deploy-7b677967b9-g4rcj" [6f59e490-1ddf-4d4c-bf7e-ca497ac5f742] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1107 23:03:21.700919   17313 system_pods.go:126] duration metric: took 6.908757038s to wait for k8s-apps to be running ...
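The 18-pod inventory above comes from listing every pod in the kube-system namespace and reporting its phase together with its Ready condition. A minimal client-go sketch of that kind of check (illustrative only, not minikube's actual system_pods.go; it assumes an ordinary kubeconfig at the default ~/.kube/config path):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			// A pod counts as healthy only when it is Running *and* Ready;
			// the "Running / Ready:ContainersNotReady" lines above fail the second test.
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("%s phase=%s ready=%v\n", p.Name, p.Status.Phase, ready)
		}
	}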
	I1107 23:03:21.700930   17313 system_svc.go:44] waiting for kubelet service to be running ....
	I1107 23:03:21.700988   17313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:03:21.705187   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:21.724639   17313 system_svc.go:56] duration metric: took 23.700704ms WaitForService to wait for kubelet.
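The WaitForService step above shells out to systemctl: with --quiet, is-active prints nothing and reports state solely through its exit code (0 means the unit is active). A hedged sketch of that check, mirroring the exact invocation from the ssh_runner log line:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Exit code 0 => unit active; a non-nil error => not active (or sudo failed).
		err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
		fmt.Println("kubelet running:", err == nil)
	}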
	I1107 23:03:21.724670   17313 kubeadm.go:581] duration metric: took 16.464053173s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1107 23:03:21.724695   17313 node_conditions.go:102] verifying NodePressure condition ...
	I1107 23:03:21.727785   17313 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1107 23:03:21.727829   17313 node_conditions.go:123] node cpu capacity is 2
	I1107 23:03:21.727843   17313 node_conditions.go:105] duration metric: took 3.139274ms to run NodePressure ...
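The NodePressure check above reads the capacity fields straight off the node object (17784752Ki of ephemeral storage, 2 CPUs). One way to fetch the same numbers with client-go (a sketch under the same kubeconfig assumption as above, not the report's own node_conditions.go):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Status.Capacity is a ResourceList; the helper accessors return Quantities.
			fmt.Printf("%s ephemeral-storage=%s cpu=%d\n", n.Name,
				n.Status.Capacity.StorageEphemeral().String(),
				n.Status.Capacity.Cpu().Value())
		}
	}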
	I1107 23:03:21.727859   17313 start.go:228] waiting for startup goroutines ...
	I1107 23:03:21.891121   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:21.902078   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:22.134375   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:22.205254   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:22.390188   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:22.397477   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:22.640177   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:22.705766   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:22.890809   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:22.896722   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:23.135391   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:23.205995   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:23.390180   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:23.396757   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:23.635827   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:23.705232   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:23.892253   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:23.898091   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:24.145493   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:24.205151   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:24.390621   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:24.397260   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:24.635115   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:24.706109   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:24.891096   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:24.896852   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:25.134841   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:25.207953   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:25.390974   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:25.398027   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:25.633775   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:25.705236   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:25.889238   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:25.898341   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:26.134538   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:26.205439   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:26.390012   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:26.400259   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:26.635670   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:26.706194   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:26.891484   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:26.897933   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:27.136244   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:27.209685   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:27.390975   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:27.398369   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:27.645293   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:27.712969   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:27.889906   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:27.897425   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:28.136684   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:28.206608   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:28.390428   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:28.396633   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:28.635303   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:28.705351   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:28.890969   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:28.897565   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:29.135585   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:29.205294   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:29.390011   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:29.397376   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:29.634738   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:29.706011   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:29.889243   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:29.898712   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:30.134466   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:30.212604   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:30.389981   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:30.398710   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:30.635710   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:30.705401   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:30.891072   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:30.900379   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:31.138705   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:31.206726   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:31.615258   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:31.618903   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:31.637425   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:31.708429   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:31.901637   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:31.902546   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:32.135213   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:32.217409   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:32.390141   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:32.396608   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:32.649396   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:32.704863   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:32.889575   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:32.897698   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:33.135475   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:33.204897   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:33.389393   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:33.399513   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:33.634366   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:33.704728   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:33.889327   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:33.896975   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:34.134480   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:34.205381   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:34.391039   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:34.398233   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:34.642922   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:34.706583   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:34.890611   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:34.899810   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:35.135186   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:35.206283   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:35.390799   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:35.398521   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:35.634982   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:35.705750   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:35.890273   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:35.897188   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:36.134689   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:36.205471   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:36.390034   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:36.397632   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:36.634830   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:36.706945   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:36.890201   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:36.896851   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:37.134987   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:37.211171   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:37.390014   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:37.406669   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:37.635252   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:37.710339   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:37.889696   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:37.897195   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:38.134865   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:38.206306   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:38.390028   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:38.397782   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:38.642897   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:38.707269   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:39.193053   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:39.193063   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:39.194379   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:39.208017   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:39.390064   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:39.396784   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:39.635310   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:39.706493   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:39.890136   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:39.898638   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:40.135787   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:40.205626   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:40.390021   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:40.398762   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:40.634178   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:40.705861   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:40.889766   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:40.901172   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:41.134945   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:41.206261   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:41.389847   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:41.397492   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:41.638030   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:41.708033   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:41.889877   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:41.896950   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:42.135163   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:42.206394   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:42.389509   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:42.397143   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:42.634809   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:42.709671   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:42.890225   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:42.900799   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:43.135769   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:43.205401   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:43.390145   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:43.397275   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:43.636011   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:43.709626   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:43.890367   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:43.897852   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:44.134823   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:44.206441   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:44.391827   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:44.397595   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:44.643608   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:44.705756   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:44.889384   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:44.896960   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:45.134876   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:45.204892   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:45.389854   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:45.397396   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:45.634571   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:45.705779   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:45.891653   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:45.897977   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:46.141628   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:46.204616   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:46.389564   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:46.396734   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:46.635971   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:46.710932   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:46.892796   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:46.901042   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:47.449950   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:47.450711   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:47.450881   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:47.453847   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:47.635156   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:47.706694   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:47.891077   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:47.896419   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:48.134846   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:48.205410   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:48.390723   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:48.398220   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:48.634955   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:48.705840   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:48.892082   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:48.897035   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:49.452508   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:49.452990   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:49.453012   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:49.458842   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:49.637187   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:49.705841   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:49.890953   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:49.897255   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:50.141395   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:50.208084   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:50.389705   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:50.397797   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:50.637894   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:50.708248   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:50.890159   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:50.899017   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:51.134287   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:51.207355   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:51.390001   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:51.397759   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:51.644600   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:51.707288   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:51.890088   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:51.897453   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:52.135122   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:52.205615   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:52.389455   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:52.397319   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:52.635083   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:52.705502   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:52.890563   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:52.897384   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:53.134848   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:53.208256   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:53.390624   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:53.397534   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:53.635162   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:53.705811   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:53.889977   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:53.906416   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:54.139584   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:54.205240   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:54.389808   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:54.397513   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:54.635473   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:54.709725   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:54.892444   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:54.908421   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:55.137640   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:55.205700   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:55.390958   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:55.402550   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:55.635163   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:55.708574   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:55.890154   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:55.897302   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:56.134574   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:56.205400   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:56.390711   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:56.398754   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:56.643689   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:56.709485   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:56.890975   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:56.900755   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:57.135228   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:57.205149   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:57.389876   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:57.397809   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:57.635670   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:57.704719   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:57.893877   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:57.897711   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:58.135435   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:58.205299   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:58.390039   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:58.399556   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:58.636180   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:58.707621   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:58.891054   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:58.897495   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:59.134751   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:59.204908   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:59.390882   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:59.398357   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:59.634722   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:59.705399   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:59.892013   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:59.899550   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:00.134802   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:00.205131   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:00.393309   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:00.401046   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:00.634339   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:00.706231   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:00.890303   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:00.897405   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:01.134986   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:01.206327   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:01.390133   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:01.396935   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:01.635636   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:01.705083   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:01.889659   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:01.896959   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:02.135328   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:02.206026   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:02.390018   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:02.397243   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:02.636379   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:02.704646   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:02.890962   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:02.898428   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:03.143892   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:03.212246   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:03.389756   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:03.397609   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:03.635454   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:03.715959   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:03.889451   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:03.896401   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:04.134319   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:04.204552   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:04.391356   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:04.396239   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:04.634712   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:04.704301   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:04.889982   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:04.898163   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:05.135327   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:05.205570   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:05.389841   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:05.397613   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:05.636000   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:05.707977   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:05.890176   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:05.900475   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:06.136082   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:06.207076   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:06.390736   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:06.398428   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:06.639760   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:06.706308   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:06.890412   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:06.897615   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:07.135120   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:07.207054   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:07.390597   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:07.397632   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:07.635148   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:07.710899   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:07.891792   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:07.898288   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:08.135479   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:08.205477   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:08.390398   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:08.397858   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:08.634569   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:08.706251   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:08.889885   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:08.902216   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:09.135763   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:09.206247   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:09.389908   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:09.397965   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:09.635558   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:09.705229   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:09.890972   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:09.899263   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:10.135612   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:10.204632   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:10.390352   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:10.397589   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:10.635308   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:10.716792   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:10.890072   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:10.897190   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:11.134425   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:11.208168   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:11.389972   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:11.397022   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:11.635218   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:11.705978   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:11.890979   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:11.899165   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:12.134886   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:12.205719   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:12.389913   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:12.397630   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:12.635123   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:12.705699   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:12.890318   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:12.896267   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:13.135365   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:13.204871   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:13.389288   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:13.396521   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:13.634846   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:13.705944   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:13.889450   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:13.897153   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:14.286976   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:14.287580   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:14.389912   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:14.397502   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:14.634873   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:14.705015   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:14.889526   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:14.897068   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:15.134842   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:15.214173   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:15.390438   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:15.397113   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:15.634690   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:15.705904   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:15.889589   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:15.900523   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:16.139212   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:16.205922   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:16.390273   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:16.397486   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:16.635348   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:16.713036   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:16.889844   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:16.897755   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:17.135296   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:17.314909   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:17.636025   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:17.636882   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:17.639478   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:17.704977   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:17.902578   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:17.907590   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:18.135164   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:18.214998   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:18.389483   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:18.397114   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:18.634738   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:18.705635   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:18.891118   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:18.896326   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:19.135683   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:19.205328   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:19.392067   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:19.399522   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:19.634948   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:19.710924   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:19.889802   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:19.897954   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:20.134693   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:20.204976   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:20.392273   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:20.398784   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:20.636044   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:20.706690   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:20.890230   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:20.897508   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:21.135345   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:21.207136   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:21.397957   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:21.410742   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:21.635238   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:21.705879   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:21.890587   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:21.899855   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:22.134565   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:22.205404   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:22.390704   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:22.397310   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:22.635878   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:22.717711   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:22.900032   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:22.901252   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:23.134278   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:23.205939   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:23.390193   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:23.397947   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:23.646917   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:23.734681   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:23.890048   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:23.898907   17313 kapi.go:107] duration metric: took 1m10.039799958s to wait for kubernetes.io/minikube-addons=registry ...
	I1107 23:04:24.136102   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:24.206691   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:24.392511   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:24.645132   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:24.711679   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:24.889833   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:25.135733   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:25.209739   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:25.390431   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:25.634049   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:25.706092   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:25.890094   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:26.135382   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:26.205908   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:26.390411   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:26.635641   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:26.706772   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:26.890140   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:27.134513   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:27.205671   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:27.390333   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:27.634802   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:27.705470   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:27.890519   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:28.135110   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:28.208027   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:28.389587   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:28.634488   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:28.705253   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:28.915428   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:29.134403   17313 kapi.go:107] duration metric: took 1m12.047954385s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1107 23:04:29.136238   17313 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-245409 cluster.
	I1107 23:04:29.137716   17313 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1107 23:04:29.139100   17313 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
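
The three gcp-auth messages above describe the webhook's opt-out and refresh behavior. Below is a minimal sketch, not minikube code, of a pod spec carrying the `gcp-auth-skip-secret` label so the webhook leaves credentials unmounted. Only the label key comes from the message above; the pod name, image, and label value are illustrative assumptions.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	pod := corev1.Pod{
    		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
    		ObjectMeta: metav1.ObjectMeta{
    			Name:      "no-gcp-creds", // hypothetical name
    			Namespace: "default",
    			// The key is what the webhook checks; "true" is an assumed value.
    			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
    		},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:    "app",
    				Image:   "busybox",
    				Command: []string{"sleep", "3600"},
    			}},
    		},
    	}
    	out, err := json.MarshalIndent(pod, "", "  ")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(string(out)) // pipe into `kubectl apply -f -`
    }

Per the last message, pods created before the addon was enabled only pick up credentials after being recreated or after rerunning the enable step with the refresh flag (e.g. `minikube addons enable gcp-auth --refresh`).
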
	I1107 23:04:29.207763   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:29.390312   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:29.704793   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:29.890507   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:30.206013   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:30.392215   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:30.709787   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:30.920654   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:31.205065   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:31.389091   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:31.708335   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:31.890023   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:32.206545   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:32.389396   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:32.710374   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:32.890340   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:33.206724   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:33.389319   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:33.708629   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:33.889411   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:34.205025   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:34.389997   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:34.710125   17313 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:34.892284   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:35.204627   17313 kapi.go:107] duration metric: took 1m20.608433011s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1107 23:04:35.389732   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:35.890506   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:36.391275   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:36.889865   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:37.390911   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:37.891786   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:38.389763   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:38.890248   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:39.390986   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:39.890952   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:40.390519   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:40.890909   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:41.391732   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:41.890520   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:42.390561   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:42.889602   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:43.389754   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:43.890536   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:44.390412   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:44.889580   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:45.390548   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:45.889706   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:46.390287   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:46.890229   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:47.391206   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:47.890870   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:48.390280   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:48.889159   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:49.389469   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:49.891548   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:50.390450   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:50.891438   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:51.390235   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:51.890468   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:52.390133   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:52.890389   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:53.390215   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:53.889470   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:54.390216   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:54.889731   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:55.390652   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:55.889769   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:56.389887   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:56.890321   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:57.390272   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:57.889922   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:58.390876   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:58.890239   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:59.389672   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:59.891061   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:00.389515   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:00.890512   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:01.389713   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:01.890456   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:02.390546   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:02.890204   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:03.389507   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:03.890144   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:04.389656   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:04.890316   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:05.390030   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:05.890844   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:06.390833   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:06.890778   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:07.390462   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:07.890288   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:08.391183   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:08.889742   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:09.390390   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:09.890272   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:10.390396   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:10.890024   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:11.390468   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:11.890313   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:12.390021   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:12.890499   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:13.389966   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:13.890746   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:14.390004   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:14.891308   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:15.390006   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:15.890354   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:16.389940   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:16.891254   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:17.389768   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:17.891203   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:18.391107   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:18.889813   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:19.390613   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:19.892880   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:20.390571   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:20.890609   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:21.389694   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:21.890855   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:22.390423   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:22.890024   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:23.390680   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:23.890420   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:24.389900   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:24.890518   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:25.392010   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:25.891002   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:26.389961   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:26.890068   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:27.391165   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:27.890156   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:28.390291   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:28.891213   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:29.390336   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:29.890448   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:30.393239   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:30.890278   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:31.390141   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:31.890890   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:32.390680   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:32.901961   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:33.390026   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:33.890185   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:34.394847   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:34.890333   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:35.403217   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:35.890282   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:36.391028   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:36.891354   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:37.390539   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:37.890936   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:38.390862   17313 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:05:38.890449   17313 kapi.go:107] duration metric: took 2m25.034350057s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1107 23:05:38.892415   17313 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, default-storageclass, storage-provisioner-rancher, helm-tiller, nvidia-device-plugin, inspektor-gadget, metrics-server, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1107 23:05:38.893774   17313 addons.go:502] enable addons completed in 2m33.743001494s: enabled=[cloud-spanner ingress-dns storage-provisioner default-storageclass storage-provisioner-rancher helm-tiller nvidia-device-plugin inspektor-gadget metrics-server volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1107 23:05:38.893816   17313 start.go:233] waiting for cluster config update ...
	I1107 23:05:38.893832   17313 start.go:242] writing updated cluster config ...
	I1107 23:05:38.894137   17313 ssh_runner.go:195] Run: rm -f paused
	I1107 23:05:38.944573   17313 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1107 23:05:38.947200   17313 out.go:177] * Done! kubectl is now configured to use "addons-245409" cluster and "default" namespace by default
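
The repeated kapi.go:96 lines above are minikube polling each addon's pods by label selector until they leave Pending; kapi.go:107 then records how long each wait took. A minimal client-go sketch of that polling pattern follows. It is not minikube's actual implementation; the function name, poll interval, and kubeconfig handling are assumptions.

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForPods polls until every pod matching selector in ns is Running,
    // logging non-Running phases on each pass, or fails after timeout.
    func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
    	start := time.Now()
    	for {
    		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    		if err != nil {
    			return err
    		}
    		ready := len(pods.Items) > 0
    		for _, p := range pods.Items {
    			if p.Status.Phase != corev1.PodRunning {
    				ready = false
    				log.Printf("waiting for pod %q, current state: %s", selector, p.Status.Phase)
    			}
    		}
    		if ready {
    			log.Printf("duration metric: took %s to wait for %s", time.Since(start), selector)
    			return nil
    		}
    		if time.Since(start) > timeout {
    			return fmt.Errorf("timed out after %v waiting for %s", timeout, selector)
    		}
    		time.Sleep(500 * time.Millisecond) // assumed; the ~500ms log cadence above suggests a similar interval
    	}
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	selector := "kubernetes.io/minikube-addons=registry" // one of the selectors seen above
    	if err := waitForPods(context.Background(), cs, "kube-system", selector, 6*time.Minute); err != nil {
    		log.Fatal(err)
    	}
    }

Run against the cluster above with the selector kubernetes.io/minikube-addons=registry, this would emit the same Pending lines until the registry pods report Running, then a duration line like the kapi.go:107 entries.
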
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-07 23:02:23 UTC, ends at Tue 2023-11-07 23:08:59 UTC. --
	Nov 07 23:08:59 addons-245409 crio[713]: time="2023-11-07 23:08:59.559875055Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=34ab44e1-ef8c-4729-abec-61d65b12e5a2 name=/runtime.v1.RuntimeService/Version
	Nov 07 23:08:59 addons-245409 crio[713]: time="2023-11-07 23:08:59.562195658Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f484715f-9902-4ef2-ad8f-8dfc8292cfcc name=/runtime.v1.ImageService/ImageFsInfo
	Nov 07 23:08:59 addons-245409 crio[713]: time="2023-11-07 23:08:59.563985686Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e4de196f-d6f0-44bd-abe0-8d7834798ed5 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 07 23:08:59 addons-245409 crio[713]: time="2023-11-07 23:08:59.564202206Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7d79aad0f83a51bb49d63ea68a88bc675eea5c2ea0947b398896f3d0b5e2fd49,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d77478584-6qznx,Uid:f7cde473-7e11-47c0-bedc-6b20f993879a,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1699398529768078889,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d77478584-6qznx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f7cde473-7e11-47c0-bedc-6b20f993879a,pod-template-hash: 5d77478584,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-07T23:08:49.427613918Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:62bbbb60e5277e5fd9818ee9fe358c38fa65849b1e5e9fd3afe25f245d5dc9f4,Metadata:&PodSandboxMetadata{Name:nginx,Uid:5c192263-36d7-41b2-9be7-c4e7a400b6f4,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1699398384165123848,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c192263-36d7-41b2-9be7-c4e7a400b6f4,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-07T23:06:23.824294049Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d7316bd007cec9cba2dd8c30e62f67790e81c87cf22911e8e0acf148dbf412cb,Metadata:&PodSandboxMetadata{Name:headlamp-94b766c-9rm52,Uid:a844ff26-24d3-44ff-8137-3b41431422ff,Namespace:headlamp,Attempt:0,},State:SANDBOX_READY,CreatedAt:1699398354358959589,Labels:map[string]string{app.kubernetes.io/instance: headlamp,app.kubernetes.io/name: headlamp,io.kubernetes.container.name: POD,io.kubernetes.pod.name: headlamp-94b766c-9rm52,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: a844ff26-24d3-44ff-8137-3b41431422ff,pod-template-hash: 94b766c,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-07T23:
05:53.461560994Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e89708924064c3bae9ed45592389a7ed0c7d7ed66b9d1b914d687162e97a812b,Metadata:&PodSandboxMetadata{Name:gcp-auth-d4c87556c-8zzj7,Uid:6ad001a9-406b-41be-9838-500301f2332f,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1699398261442243757,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-d4c87556c-8zzj7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 6ad001a9-406b-41be-9838-500301f2332f,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: d4c87556c,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-07T23:03:17.049244168Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3babb12f480615cd1438274db42208b12a7d6b25bf9ea4b52612636efa393e14,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:6d52d7cf-93a5-4966-8fac-41e9b9cf2556,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1699398192968786329,La
bels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d52d7cf-93a5-4966-8fac-41e9b9cf2556,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}
\n,kubernetes.io/config.seen: 2023-11-07T23:03:12.331936524Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:611d1628e45eb009cba9e62cf93c15fa71178adfba162c528b63000aaf1f8dfa,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-kqbfn,Uid:6bca5551-b4bf-4b0c-b10d-497aef1406b9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1699398187246362894,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-kqbfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bca5551-b4bf-4b0c-b10d-497aef1406b9,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-07T23:03:06.915974447Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5ab70e5679c5d3528b53a44cdb0da1e2c5f04958e10e6deacc459cb5bff7c5f8,Metadata:&PodSandboxMetadata{Name:kube-proxy-trzdn,Uid:25df10a0-64b6-412c-a77e-9cd904eba85a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1699398187
209408065,Labels:map[string]string{controller-revision-hash: dffc744c9,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-trzdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25df10a0-64b6-412c-a77e-9cd904eba85a,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-07T23:03:06.580169268Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c786069480c05b07d3edddee76a15e69cf91dd1b52748592b359b140ea6d5de5,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-245409,Uid:9ffdbd6969807fdb0e7422643fcd7839,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1699398165046013044,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-245409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ffdbd6969807fdb0e7422643fcd7839,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver
.advertise-address.endpoint: 192.168.39.205:8443,kubernetes.io/config.hash: 9ffdbd6969807fdb0e7422643fcd7839,kubernetes.io/config.seen: 2023-11-07T23:02:44.456541635Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dcda94fcb9e0824d6a6621f71c76c61cb011dd919ab937a3d9b8780b232ab247,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-245409,Uid:a151f3ad6f5623e92cc7f25996889001,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1699398165023439988,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-245409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a151f3ad6f5623e92cc7f25996889001,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a151f3ad6f5623e92cc7f25996889001,kubernetes.io/config.seen: 2023-11-07T23:02:44.456536766Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5434759d63ea8ad0b40afc40e6cae41d3cda8d
0a0ecc5c10be91980d79ed5fad,Metadata:&PodSandboxMetadata{Name:etcd-addons-245409,Uid:019f4f9b0771998bde2e47be4389857e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1699398165013277618,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-245409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019f4f9b0771998bde2e47be4389857e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.205:2379,kubernetes.io/config.hash: 019f4f9b0771998bde2e47be4389857e,kubernetes.io/config.seen: 2023-11-07T23:02:44.456540556Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:343f4be3c3a45c60afff6f5a0ac9d85896faf6623cf7d7af1c4a7c67ff531ae7,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-245409,Uid:7fd3e04db3e429207bcd73816d547519,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1699398164974028491,Labels:map[string]string{component: kube-s
cheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-245409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fd3e04db3e429207bcd73816d547519,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7fd3e04db3e429207bcd73816d547519,kubernetes.io/config.seen: 2023-11-07T23:02:44.456539499Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=e4de196f-d6f0-44bd-abe0-8d7834798ed5 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 07 23:08:59 addons-245409 crio[713]: time="2023-11-07 23:08:59.564921753Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=51ea643c-0077-4a1a-94f8-112839c65a0e name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:08:59 addons-245409 crio[713]: time="2023-11-07 23:08:59.565001245Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=51ea643c-0077-4a1a-94f8-112839c65a0e name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:08:59 addons-245409 crio[713]: time="2023-11-07 23:08:59.565261864Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a078da224a568fe0100c5dbe04eccd19c00f2a903f31cd08fe3f8f8f4cbb4da7,PodSandboxId:7d79aad0f83a51bb49d63ea68a88bc675eea5c2ea0947b398896f3d0b5e2fd49,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1699398533316217715,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-6qznx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f7cde473-7e11-47c0-bedc-6b20f993879a,},Annotations:map[string]string{io.kubernetes.container.hash: 9591d9b3,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19676d3928805f77a8ae3c3f9a397f3740b0c13bf01b611b9e7b912408988416,PodSandboxId:62bbbb60e5277e5fd9818ee9fe358c38fa65849b1e5e9fd3afe25f245d5dc9f4,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1699398390475412893,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c192263-36d7-41b2-9be7-c4e7a400b6f4,},Annotations:map[string]string{io.kubernet
es.container.hash: 22470a47,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8daa807d043337be68143b3c6dc307c227584db5ed3ab76b3b80414b2e48800d,PodSandboxId:d7316bd007cec9cba2dd8c30e62f67790e81c87cf22911e8e0acf148dbf412cb,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4,State:CONTAINER_RUNNING,CreatedAt:1699398367122175035,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-94b766c-9rm52,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid:
a844ff26-24d3-44ff-8137-3b41431422ff,},Annotations:map[string]string{io.kubernetes.container.hash: 2755981e,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e365d007bb68b7b8725385337681cc9117c9d9161279e410b505d530678785e2,PodSandboxId:e89708924064c3bae9ed45592389a7ed0c7d7ed66b9d1b914d687162e97a812b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1699398268146214579,Labels:map[string]string{io.kubernetes.container.name: g
cp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-8zzj7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 6ad001a9-406b-41be-9838-500301f2332f,},Annotations:map[string]string{io.kubernetes.container.hash: 61ab5aa7,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64caaa8fa3d22f2c7743813ccb518e884d98c714fedc01d3e30daf773d9dd146,PodSandboxId:3babb12f480615cd1438274db42208b12a7d6b25bf9ea4b52612636efa393e14,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699398
202480708632,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d52d7cf-93a5-4966-8fac-41e9b9cf2556,},Annotations:map[string]string{io.kubernetes.container.hash: d56e158b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60356df05d632ba5e668483f0fe65e510a8bff86b4ce8f76e8652ae38eefc095,PodSandboxId:5ab70e5679c5d3528b53a44cdb0da1e2c5f04958e10e6deacc459cb5bff7c5f8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699398199713888196,Labels:map
[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-trzdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25df10a0-64b6-412c-a77e-9cd904eba85a,},Annotations:map[string]string{io.kubernetes.container.hash: 8090d184,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd956c1e8bd675529941438e9a07306961d3a54d8b9217fadcdf3104b8a6518,PodSandboxId:611d1628e45eb009cba9e62cf93c15fa71178adfba162c528b63000aaf1f8dfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699398192044885374,Labels:map[string]string{io.kubernetes.cont
ainer.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kqbfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bca5551-b4bf-4b0c-b10d-497aef1406b9,},Annotations:map[string]string{io.kubernetes.container.hash: f9680e19,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebb5a3401e01f5edd599bdfa44be7336cbd21d9ffa5161b8cfbbf04cad014fc7,PodSandboxId:5434759d63ea8ad0b40afc40e6cae41d3cda8d0a0ecc5c10be91980d79ed5fad,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:regist
ry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699398165924360151,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-245409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019f4f9b0771998bde2e47be4389857e,},Annotations:map[string]string{io.kubernetes.container.hash: ba0d220e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be6a0189fdfc27c0cc1a3fabe7049f5db4df527f312e2bcc19ccb8215600fd44,PodSandboxId:343f4be3c3a45c60afff6f5a0ac9d85896faf6623cf7d7af1c4a7c67ff531ae7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e59
37bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699398165897302751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-245409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fd3e04db3e429207bcd73816d547519,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c4971ab1e557601cc2712614c84d40e93bf0558e0195d0dbea2f8090c0ce506,PodSandboxId:c786069480c05b07d3edddee76a15e69cf91dd1b52748592b359b140ea6d5de5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db
7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699398165873821745,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-245409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ffdbd6969807fdb0e7422643fcd7839,},Annotations:map[string]string{io.kubernetes.container.hash: 4134d651,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:790aeb63199125738e0dd7dea3e9ea80f39e4be76051d04d396cf7256c1e195c,PodSandboxId:dcda94fcb9e0824d6a6621f71c76c61cb011dd919ab937a3d9b8780b232ab247,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79
315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699398165851449416,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-245409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a151f3ad6f5623e92cc7f25996889001,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=51ea643c-0077-4a1a-94f8-112839c65a0e name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:08:59 addons-245409 crio[713]: time="2023-11-07 23:08:59.566006804Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699398539565989754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:528627,},InodesUsed:&UInt64Value{Value:221,},},},}" file="go-grpc-middleware/chain.go:25" id=f484715f-9902-4ef2-ad8f-8dfc8292cfcc name=/runtime.v1.ImageService/ImageFsInfo
	Nov 07 23:08:59 addons-245409 crio[713]: time="2023-11-07 23:08:59.566581192Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=337ba7c1-19d0-401c-9b07-f57d75f94567 name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:08:59 addons-245409 crio[713]: time="2023-11-07 23:08:59.566622975Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=337ba7c1-19d0-401c-9b07-f57d75f94567 name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:08:59 addons-245409 crio[713]: time="2023-11-07 23:08:59.566935468Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a078da224a568fe0100c5dbe04eccd19c00f2a903f31cd08fe3f8f8f4cbb4da7,PodSandboxId:7d79aad0f83a51bb49d63ea68a88bc675eea5c2ea0947b398896f3d0b5e2fd49,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1699398533316217715,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-6qznx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f7cde473-7e11-47c0-bedc-6b20f993879a,},Annotations:map[string]string{io.kubernetes.container.hash: 9591d9b3,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19676d3928805f77a8ae3c3f9a397f3740b0c13bf01b611b9e7b912408988416,PodSandboxId:62bbbb60e5277e5fd9818ee9fe358c38fa65849b1e5e9fd3afe25f245d5dc9f4,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1699398390475412893,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c192263-36d7-41b2-9be7-c4e7a400b6f4,},Annotations:map[string]string{io.kubernet
es.container.hash: 22470a47,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8daa807d043337be68143b3c6dc307c227584db5ed3ab76b3b80414b2e48800d,PodSandboxId:d7316bd007cec9cba2dd8c30e62f67790e81c87cf22911e8e0acf148dbf412cb,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4,State:CONTAINER_RUNNING,CreatedAt:1699398367122175035,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-94b766c-9rm52,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid:
a844ff26-24d3-44ff-8137-3b41431422ff,},Annotations:map[string]string{io.kubernetes.container.hash: 2755981e,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e365d007bb68b7b8725385337681cc9117c9d9161279e410b505d530678785e2,PodSandboxId:e89708924064c3bae9ed45592389a7ed0c7d7ed66b9d1b914d687162e97a812b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1699398268146214579,Labels:map[string]string{io.kubernetes.container.name: g
cp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-8zzj7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 6ad001a9-406b-41be-9838-500301f2332f,},Annotations:map[string]string{io.kubernetes.container.hash: 61ab5aa7,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64caaa8fa3d22f2c7743813ccb518e884d98c714fedc01d3e30daf773d9dd146,PodSandboxId:3babb12f480615cd1438274db42208b12a7d6b25bf9ea4b52612636efa393e14,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699398
202480708632,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d52d7cf-93a5-4966-8fac-41e9b9cf2556,},Annotations:map[string]string{io.kubernetes.container.hash: d56e158b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60356df05d632ba5e668483f0fe65e510a8bff86b4ce8f76e8652ae38eefc095,PodSandboxId:5ab70e5679c5d3528b53a44cdb0da1e2c5f04958e10e6deacc459cb5bff7c5f8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699398199713888196,Labels:map
[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-trzdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25df10a0-64b6-412c-a77e-9cd904eba85a,},Annotations:map[string]string{io.kubernetes.container.hash: 8090d184,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd956c1e8bd675529941438e9a07306961d3a54d8b9217fadcdf3104b8a6518,PodSandboxId:611d1628e45eb009cba9e62cf93c15fa71178adfba162c528b63000aaf1f8dfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699398192044885374,Labels:map[string]string{io.kubernetes.cont
ainer.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kqbfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bca5551-b4bf-4b0c-b10d-497aef1406b9,},Annotations:map[string]string{io.kubernetes.container.hash: f9680e19,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebb5a3401e01f5edd599bdfa44be7336cbd21d9ffa5161b8cfbbf04cad014fc7,PodSandboxId:5434759d63ea8ad0b40afc40e6cae41d3cda8d0a0ecc5c10be91980d79ed5fad,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:regist
ry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699398165924360151,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-245409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019f4f9b0771998bde2e47be4389857e,},Annotations:map[string]string{io.kubernetes.container.hash: ba0d220e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be6a0189fdfc27c0cc1a3fabe7049f5db4df527f312e2bcc19ccb8215600fd44,PodSandboxId:343f4be3c3a45c60afff6f5a0ac9d85896faf6623cf7d7af1c4a7c67ff531ae7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e59
37bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699398165897302751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-245409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fd3e04db3e429207bcd73816d547519,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c4971ab1e557601cc2712614c84d40e93bf0558e0195d0dbea2f8090c0ce506,PodSandboxId:c786069480c05b07d3edddee76a15e69cf91dd1b52748592b359b140ea6d5de5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db
7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699398165873821745,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-245409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ffdbd6969807fdb0e7422643fcd7839,},Annotations:map[string]string{io.kubernetes.container.hash: 4134d651,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:790aeb63199125738e0dd7dea3e9ea80f39e4be76051d04d396cf7256c1e195c,PodSandboxId:dcda94fcb9e0824d6a6621f71c76c61cb011dd919ab937a3d9b8780b232ab247,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79
315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699398165851449416,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-245409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a151f3ad6f5623e92cc7f25996889001,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=337ba7c1-19d0-401c-9b07-f57d75f94567 name=/runtime.v1.RuntimeService/ListContainers
	[two further identical polling cycles (Version, ImageFsInfo, ListContainers; ids 0e00ccce/d4a36e3f/03d4d6f4 and 952c2c68/765c9a07/eb220e21) were logged at 23:08:59; each ListContainers response matched the container list above verbatim and is omitted here]
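	The repeated RuntimeService/ListContainers traffic above is the kubelet polling CRI-O over its gRPC socket every few hundred milliseconds with an empty filter (hence the "No filters were applied" line). For reference, a minimal Go sketch of the same call, assuming the k8s.io/cri-api client, google.golang.org/grpc, and root access to the node's /var/run/crio/crio.sock (the socket named in the kubeadm cri-socket annotation further below); this is illustrative and not part of the test suite:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the CRI-O socket that served the log lines above.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		rt := runtimeapi.NewRuntimeServiceClient(conn)
		// An empty filter reproduces the "No filters were applied" path in the log.
		resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// First 13 hex chars of the ID, matching the "container status" table below.
			fmt.Printf("%s  %s  %s\n", c.Id[:13], c.State, c.Metadata.Name)
		}
	}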
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                          CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a078da224a568       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7        6 seconds ago       Running             hello-world-app           0                   7d79aad0f83a5       hello-world-app-5d77478584-6qznx
	19676d3928805       docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d                2 minutes ago       Running             nginx                     0                   62bbbb60e5277       nginx
	8daa807d04333       ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4          2 minutes ago       Running             headlamp                  0                   d7316bd007cec       headlamp-94b766c-9rm52
	e365d007bb68b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06   4 minutes ago       Running             gcp-auth                  0                   e89708924064c       gcp-auth-d4c87556c-8zzj7
	64caaa8fa3d22       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                               5 minutes ago       Running             storage-provisioner       0                   3babb12f48061       storage-provisioner
	60356df05d632       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                               5 minutes ago       Running             kube-proxy                0                   5ab70e5679c5d       kube-proxy-trzdn
	5bd956c1e8bd6       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                               5 minutes ago       Running             coredns                   0                   611d1628e45eb       coredns-5dd5756b68-kqbfn
	ebb5a3401e01f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                               6 minutes ago       Running             etcd                      0                   5434759d63ea8       etcd-addons-245409
	be6a0189fdfc2       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                               6 minutes ago       Running             kube-scheduler            0                   343f4be3c3a45       kube-scheduler-addons-245409
	5c4971ab1e557       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                               6 minutes ago       Running             kube-apiserver            0                   c786069480c05       kube-apiserver-addons-245409
	790aeb6319912       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                               6 minutes ago       Running             kube-controller-manager   0                   dcda94fcb9e08       kube-controller-manager-addons-245409
	
	* 
	* ==> coredns [5bd956c1e8bd675529941438e9a07306961d3a54d8b9217fadcdf3104b8a6518] <==
	* [INFO] 10.244.0.5:34977 - 9156 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000198481s
	[INFO] 10.244.0.5:43464 - 18720 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000075731s
	[INFO] 10.244.0.5:43464 - 15651 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000066555s
	[INFO] 10.244.0.5:39907 - 41555 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000048639s
	[INFO] 10.244.0.5:39907 - 53293 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000125184s
	[INFO] 10.244.0.5:36737 - 62011 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000074366s
	[INFO] 10.244.0.5:36737 - 48185 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000142306s
	[INFO] 10.244.0.5:39501 - 22670 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000085289s
	[INFO] 10.244.0.5:39501 - 61322 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000050484s
	[INFO] 10.244.0.5:53517 - 1135 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000041617s
	[INFO] 10.244.0.5:53517 - 45425 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000059729s
	[INFO] 10.244.0.5:35162 - 65029 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000036395s
	[INFO] 10.244.0.5:35162 - 1280 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000032299s
	[INFO] 10.244.0.5:35622 - 23324 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000039679s
	[INFO] 10.244.0.5:35622 - 1054 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000038029s
	[INFO] 10.244.0.19:44164 - 29203 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000390974s
	[INFO] 10.244.0.19:43012 - 65001 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000159054s
	[INFO] 10.244.0.19:44847 - 32666 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000100686s
	[INFO] 10.244.0.19:54809 - 54514 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000081531s
	[INFO] 10.244.0.19:41614 - 57229 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000051338s
	[INFO] 10.244.0.19:59836 - 48423 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00014437s
	[INFO] 10.244.0.19:34620 - 6410 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000589635s
	[INFO] 10.244.0.19:42772 - 8975 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.001420229s
	[INFO] 10.244.0.24:40987 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000573752s
	[INFO] 10.244.0.24:42593 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000143592s
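	The NXDOMAIN-then-NOERROR pattern above is ordinary resolv.conf search-path expansion, not a fault: with the kubelet's default ndots:5, a pod resolver treats registry.kube-system.svc.cluster.local (only four dots) as a relative name and tries each search suffix (kube-system.svc.cluster.local, svc.cluster.local, cluster.local) before the name itself, so every lookup yields NXDOMAIN for the suffixed forms and one NOERROR for the real record; the gcp-auth pod's storage.googleapis.com lookups follow the same walk. A minimal Go sketch of that candidate ordering, assuming the standard in-cluster search path and only meaningful when run inside the cluster:

	package main

	import (
		"context"
		"fmt"
		"net"
		"strings"
		"time"
	)

	func main() {
		// Assumed defaults for a pod with dnsPolicy ClusterFirst in namespace kube-system.
		name := "registry.kube-system.svc.cluster.local"
		search := []string{"kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local"}
		ndots := 5

		// Fewer dots than ndots: the search suffixes are tried before the bare name.
		var candidates []string
		if strings.Count(name, ".") < ndots {
			for _, s := range search {
				candidates = append(candidates, name+"."+s)
			}
		}
		candidates = append(candidates, name) // the bare name resolves last, with NOERROR

		r := &net.Resolver{}
		for _, c := range candidates {
			ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
			addrs, err := r.LookupHost(ctx, c)
			cancel()
			if err != nil {
				fmt.Printf("%-75s NXDOMAIN/err: %v\n", c, err)
				continue
			}
			fmt.Printf("%-75s NOERROR %v\n", c, addrs)
		}
	}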
	
	* 
	* ==> describe nodes <==
	* Name:               addons-245409
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-245409
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e
	                    minikube.k8s.io/name=addons-245409
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_07T23_02_53_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-245409
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 Nov 2023 23:02:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-245409
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 Nov 2023 23:08:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 Nov 2023 23:06:58 +0000   Tue, 07 Nov 2023 23:02:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 Nov 2023 23:06:58 +0000   Tue, 07 Nov 2023 23:02:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 Nov 2023 23:06:58 +0000   Tue, 07 Nov 2023 23:02:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 Nov 2023 23:06:58 +0000   Tue, 07 Nov 2023 23:02:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.205
	  Hostname:    addons-245409
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 428ce58a176e421e944d63581df42890
	  System UUID:                428ce58a-176e-421e-944d-63581df42890
	  Boot ID:                    cb039e06-81c7-4530-acda-5ed49209f5f4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-6qznx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  gcp-auth                    gcp-auth-d4c87556c-8zzj7                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	  headlamp                    headlamp-94b766c-9rm52                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	  kube-system                 coredns-5dd5756b68-kqbfn                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m53s
	  kube-system                 etcd-addons-245409                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m6s
	  kube-system                 kube-apiserver-addons-245409             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-controller-manager-addons-245409    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-proxy-trzdn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 kube-scheduler-addons-245409             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m37s                  kube-proxy       
	  Normal  Starting                 6m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m15s (x8 over 6m15s)  kubelet          Node addons-245409 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m15s (x8 over 6m15s)  kubelet          Node addons-245409 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m15s (x7 over 6m15s)  kubelet          Node addons-245409 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m6s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m6s                   kubelet          Node addons-245409 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m6s                   kubelet          Node addons-245409 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m6s                   kubelet          Node addons-245409 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m6s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m6s                   kubelet          Node addons-245409 status is now: NodeReady
	  Normal  RegisteredNode           5m55s                  node-controller  Node addons-245409 event: Registered Node addons-245409 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.943683] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.697611] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.107890] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.148891] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.109854] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.217651] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[  +8.930867] systemd-fstab-generator[911]: Ignoring "noauto" for root device
	[  +9.270272] systemd-fstab-generator[1251]: Ignoring "noauto" for root device
	[Nov 7 23:03] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.120517] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.045134] kauditd_printk_skb: 8 callbacks suppressed
	[ +25.020035] kauditd_printk_skb: 5 callbacks suppressed
	[Nov 7 23:04] kauditd_printk_skb: 18 callbacks suppressed
	[ +21.462797] kauditd_printk_skb: 9 callbacks suppressed
	[Nov 7 23:05] kauditd_printk_skb: 8 callbacks suppressed
	[  +9.590975] kauditd_printk_skb: 16 callbacks suppressed
	[Nov 7 23:06] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.916001] kauditd_printk_skb: 11 callbacks suppressed
	[ +11.631847] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.549476] kauditd_printk_skb: 11 callbacks suppressed
	[  +9.040410] kauditd_printk_skb: 18 callbacks suppressed
	[Nov 7 23:08] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [ebb5a3401e01f5edd599bdfa44be7336cbd21d9ffa5161b8cfbbf04cad014fc7] <==
	* {"level":"warn","ts":"2023-11-07T23:04:14.280447Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-07T23:04:13.9208Z","time spent":"359.598411ms","remote":"127.0.0.1:35394","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1009 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2023-11-07T23:04:14.2806Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.340167ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10575"}
	{"level":"info","ts":"2023-11-07T23:04:14.280654Z","caller":"traceutil/trace.go:171","msg":"trace[1227973697] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1011; }","duration":"150.395724ms","start":"2023-11-07T23:04:14.130252Z","end":"2023-11-07T23:04:14.280648Z","steps":["trace[1227973697] 'agreement among raft nodes before linearized reading'  (duration: 150.297901ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-07T23:04:17.307142Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.543959ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82214"}
	{"level":"info","ts":"2023-11-07T23:04:17.30727Z","caller":"traceutil/trace.go:171","msg":"trace[643863262] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1020; }","duration":"108.688857ms","start":"2023-11-07T23:04:17.19857Z","end":"2023-11-07T23:04:17.307259Z","steps":["trace[643863262] 'range keys from in-memory index tree'  (duration: 108.276799ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:04:17.627792Z","caller":"traceutil/trace.go:171","msg":"trace[2075003315] transaction","detail":"{read_only:false; response_revision:1021; number_of_response:1; }","duration":"313.142827ms","start":"2023-11-07T23:04:17.314635Z","end":"2023-11-07T23:04:17.627778Z","steps":["trace[2075003315] 'process raft request'  (duration: 312.971992ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-07T23:04:17.627917Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-07T23:04:17.314621Z","time spent":"313.242668ms","remote":"127.0.0.1:35372","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":796,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/registry-proxy-8s5cm.179579a6d701ee50\" mod_revision:1017 > success:<request_put:<key:\"/registry/events/kube-system/registry-proxy-8s5cm.179579a6d701ee50\" value_size:712 lease:8547422713750043093 >> failure:<request_range:<key:\"/registry/events/kube-system/registry-proxy-8s5cm.179579a6d701ee50\" > >"}
	{"level":"info","ts":"2023-11-07T23:04:17.62822Z","caller":"traceutil/trace.go:171","msg":"trace[792505679] linearizableReadLoop","detail":"{readStateIndex:1051; appliedIndex:1051; }","duration":"242.603466ms","start":"2023-11-07T23:04:17.385606Z","end":"2023-11-07T23:04:17.628209Z","steps":["trace[792505679] 'read index received'  (duration: 242.599537ms)","trace[792505679] 'applied index is now lower than readState.Index'  (duration: 3.104µs)"],"step_count":2}
	{"level":"warn","ts":"2023-11-07T23:04:17.628562Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"242.962968ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13488"}
	{"level":"warn","ts":"2023-11-07T23:04:17.628849Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"235.693917ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82214"}
	{"level":"info","ts":"2023-11-07T23:04:17.62919Z","caller":"traceutil/trace.go:171","msg":"trace[1579272956] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1021; }","duration":"236.04115ms","start":"2023-11-07T23:04:17.393143Z","end":"2023-11-07T23:04:17.629184Z","steps":["trace[1579272956] 'agreement among raft nodes before linearized reading'  (duration: 235.552718ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:04:17.629103Z","caller":"traceutil/trace.go:171","msg":"trace[1870748136] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1021; }","duration":"243.509788ms","start":"2023-11-07T23:04:17.385582Z","end":"2023-11-07T23:04:17.629092Z","steps":["trace[1870748136] 'agreement among raft nodes before linearized reading'  (duration: 242.889886ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:05:35.068317Z","caller":"traceutil/trace.go:171","msg":"trace[753791536] linearizableReadLoop","detail":"{readStateIndex:1280; appliedIndex:1280; }","duration":"173.190516ms","start":"2023-11-07T23:05:34.895106Z","end":"2023-11-07T23:05:35.068297Z","steps":["trace[753791536] 'read index received'  (duration: 173.185904ms)","trace[753791536] 'applied index is now lower than readState.Index'  (duration: 3.4µs)"],"step_count":2}
	{"level":"warn","ts":"2023-11-07T23:05:35.069673Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"174.496903ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2023-11-07T23:05:35.069861Z","caller":"traceutil/trace.go:171","msg":"trace[1255492458] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1233; }","duration":"174.747174ms","start":"2023-11-07T23:05:34.895088Z","end":"2023-11-07T23:05:35.069835Z","steps":["trace[1255492458] 'agreement among raft nodes before linearized reading'  (duration: 173.384102ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:05:35.068071Z","caller":"traceutil/trace.go:171","msg":"trace[632570735] transaction","detail":"{read_only:false; response_revision:1233; number_of_response:1; }","duration":"175.782252ms","start":"2023-11-07T23:05:34.89226Z","end":"2023-11-07T23:05:35.068043Z","steps":["trace[632570735] 'process raft request'  (duration: 175.416222ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:06:02.732297Z","caller":"traceutil/trace.go:171","msg":"trace[843428662] linearizableReadLoop","detail":"{readStateIndex:1489; appliedIndex:1488; }","duration":"254.958828ms","start":"2023-11-07T23:06:02.477324Z","end":"2023-11-07T23:06:02.732283Z","steps":["trace[843428662] 'read index received'  (duration: 254.714216ms)","trace[843428662] 'applied index is now lower than readState.Index'  (duration: 244.118µs)"],"step_count":2}
	{"level":"warn","ts":"2023-11-07T23:06:02.732497Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"255.203323ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3738"}
	{"level":"info","ts":"2023-11-07T23:06:02.732544Z","caller":"traceutil/trace.go:171","msg":"trace[182727175] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1433; }","duration":"255.263232ms","start":"2023-11-07T23:06:02.477272Z","end":"2023-11-07T23:06:02.732535Z","steps":["trace[182727175] 'agreement among raft nodes before linearized reading'  (duration: 255.169564ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:06:02.73265Z","caller":"traceutil/trace.go:171","msg":"trace[2082475502] transaction","detail":"{read_only:false; response_revision:1433; number_of_response:1; }","duration":"269.64448ms","start":"2023-11-07T23:06:02.462992Z","end":"2023-11-07T23:06:02.732636Z","steps":["trace[2082475502] 'process raft request'  (duration: 269.162981ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:06:04.016784Z","caller":"traceutil/trace.go:171","msg":"trace[1522863634] transaction","detail":"{read_only:false; response_revision:1440; number_of_response:1; }","duration":"168.158126ms","start":"2023-11-07T23:06:03.848611Z","end":"2023-11-07T23:06:04.016769Z","steps":["trace[1522863634] 'process raft request'  (duration: 167.890429ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-07T23:06:06.6267Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.399047ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/local-path-storage/local-path-provisioner-78b46b4d5c-w42h8.179579b0ca84bd2d\" ","response":"range_response_count:1 size:989"}
	{"level":"info","ts":"2023-11-07T23:06:06.626868Z","caller":"traceutil/trace.go:171","msg":"trace[117384754] range","detail":"{range_begin:/registry/events/local-path-storage/local-path-provisioner-78b46b4d5c-w42h8.179579b0ca84bd2d; range_end:; response_count:1; response_revision:1478; }","duration":"112.581238ms","start":"2023-11-07T23:06:06.514271Z","end":"2023-11-07T23:06:06.626852Z","steps":["trace[117384754] 'range keys from in-memory index tree'  (duration: 112.057071ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:06:06.87804Z","caller":"traceutil/trace.go:171","msg":"trace[1170783910] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1479; }","duration":"247.938977ms","start":"2023-11-07T23:06:06.630087Z","end":"2023-11-07T23:06:06.878026Z","steps":["trace[1170783910] 'process raft request'  (duration: 247.834245ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:06:53.995653Z","caller":"traceutil/trace.go:171","msg":"trace[888324262] transaction","detail":"{read_only:false; response_revision:1802; number_of_response:1; }","duration":"110.713173ms","start":"2023-11-07T23:06:53.884904Z","end":"2023-11-07T23:06:53.995617Z","steps":["trace[888324262] 'process raft request'  (duration: 110.605371ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [e365d007bb68b7b8725385337681cc9117c9d9161279e410b505d530678785e2] <==
	* 2023/11/07 23:04:28 GCP Auth Webhook started!
	2023/11/07 23:05:39 Ready to marshal response ...
	2023/11/07 23:05:39 Ready to write response ...
	2023/11/07 23:05:39 Ready to marshal response ...
	2023/11/07 23:05:39 Ready to write response ...
	2023/11/07 23:05:43 Ready to marshal response ...
	2023/11/07 23:05:43 Ready to write response ...
	2023/11/07 23:05:49 Ready to marshal response ...
	2023/11/07 23:05:49 Ready to write response ...
	2023/11/07 23:05:53 Ready to marshal response ...
	2023/11/07 23:05:53 Ready to write response ...
	2023/11/07 23:05:53 Ready to marshal response ...
	2023/11/07 23:05:53 Ready to write response ...
	2023/11/07 23:05:53 Ready to marshal response ...
	2023/11/07 23:05:53 Ready to write response ...
	2023/11/07 23:06:00 Ready to marshal response ...
	2023/11/07 23:06:00 Ready to write response ...
	2023/11/07 23:06:15 Ready to marshal response ...
	2023/11/07 23:06:15 Ready to write response ...
	2023/11/07 23:06:18 Ready to marshal response ...
	2023/11/07 23:06:18 Ready to write response ...
	2023/11/07 23:06:23 Ready to marshal response ...
	2023/11/07 23:06:23 Ready to write response ...
	2023/11/07 23:08:49 Ready to marshal response ...
	2023/11/07 23:08:49 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  23:09:00 up 6 min,  0 users,  load average: 0.71, 1.24, 0.74
	Linux addons-245409 5.10.57 #1 SMP Tue Nov 7 06:51:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [5c4971ab1e557601cc2712614c84d40e93bf0558e0195d0dbea2f8090c0ce506] <==
	* E1107 23:06:16.603070       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1107 23:06:20.278014       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1107 23:06:20.308318       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1107 23:06:21.362045       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1107 23:06:23.623915       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1107 23:06:23.862850       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.159.11"}
	I1107 23:06:37.098779       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1107 23:06:37.098864       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1107 23:06:37.127289       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1107 23:06:37.127358       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1107 23:06:37.135002       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1107 23:06:37.135070       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1107 23:06:37.166002       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1107 23:06:37.166065       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1107 23:06:37.190261       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1107 23:06:37.190331       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1107 23:06:37.216922       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1107 23:06:37.217077       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1107 23:06:37.223055       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1107 23:06:37.223088       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1107 23:06:38.166421       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1107 23:06:38.216998       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1107 23:06:38.289195       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1107 23:06:53.070087       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1107 23:08:49.635548       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.178.150"}
	
	* 
	* ==> kube-controller-manager [790aeb63199125738e0dd7dea3e9ea80f39e4be76051d04d396cf7256c1e195c] <==
	* W1107 23:07:44.573603       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1107 23:07:44.573779       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1107 23:07:56.072184       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1107 23:07:56.072298       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1107 23:08:06.345544       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1107 23:08:06.345689       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1107 23:08:19.066264       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1107 23:08:19.066335       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1107 23:08:34.007853       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1107 23:08:34.007898       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1107 23:08:46.953871       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1107 23:08:46.953924       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1107 23:08:49.381309       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1107 23:08:49.412532       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-6qznx"
	I1107 23:08:49.422822       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="41.127142ms"
	I1107 23:08:49.441304       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="18.376752ms"
	I1107 23:08:49.441454       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="52.703µs"
	I1107 23:08:49.460836       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="134.936µs"
	I1107 23:08:51.599665       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1107 23:08:51.634399       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="7.44µs"
	I1107 23:08:51.649695       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W1107 23:08:51.933325       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1107 23:08:51.933452       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1107 23:08:53.944911       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="11.61484ms"
	I1107 23:08:53.945042       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="46.973µs"
	
	* 
	* ==> kube-proxy [60356df05d632ba5e668483f0fe65e510a8bff86b4ce8f76e8652ae38eefc095] <==
	* I1107 23:03:21.096909       1 server_others.go:69] "Using iptables proxy"
	I1107 23:03:21.358954       1 node.go:141] Successfully retrieved node IP: 192.168.39.205
	I1107 23:03:21.868255       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1107 23:03:21.868300       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1107 23:03:21.934386       1 server_others.go:152] "Using iptables Proxier"
	I1107 23:03:21.943098       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1107 23:03:21.943288       1 server.go:846] "Version info" version="v1.28.3"
	I1107 23:03:21.943298       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 23:03:21.968019       1 config.go:188] "Starting service config controller"
	I1107 23:03:21.968063       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1107 23:03:21.968089       1 config.go:97] "Starting endpoint slice config controller"
	I1107 23:03:21.968092       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1107 23:03:21.986606       1 config.go:315] "Starting node config controller"
	I1107 23:03:21.986762       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1107 23:03:22.072319       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1107 23:03:22.072432       1 shared_informer.go:318] Caches are synced for service config
	I1107 23:03:22.100308       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [be6a0189fdfc27c0cc1a3fabe7049f5db4df527f312e2bcc19ccb8215600fd44] <==
	* W1107 23:02:49.916229       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1107 23:02:49.916237       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1107 23:02:49.916278       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1107 23:02:49.916286       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1107 23:02:49.916330       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1107 23:02:49.916337       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1107 23:02:49.916384       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1107 23:02:49.916391       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1107 23:02:49.916436       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1107 23:02:49.916445       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1107 23:02:50.739067       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1107 23:02:50.739131       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1107 23:02:50.814983       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1107 23:02:50.815034       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1107 23:02:50.958418       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1107 23:02:50.958465       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1107 23:02:50.970308       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1107 23:02:50.970363       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1107 23:02:51.059010       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1107 23:02:51.059064       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1107 23:02:51.123678       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1107 23:02:51.123799       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1107 23:02:51.279235       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1107 23:02:51.279289       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1107 23:02:54.202199       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-07 23:02:23 UTC, ends at Tue 2023-11-07 23:09:00 UTC. --
	Nov 07 23:08:50 addons-245409 kubelet[1258]: I1107 23:08:50.795857    1258 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrzq6\" (UniqueName: \"kubernetes.io/projected/9c3a54b4-2b64-4913-8bbd-2b6594c7b5ee-kube-api-access-lrzq6\") pod \"9c3a54b4-2b64-4913-8bbd-2b6594c7b5ee\" (UID: \"9c3a54b4-2b64-4913-8bbd-2b6594c7b5ee\") "
	Nov 07 23:08:50 addons-245409 kubelet[1258]: I1107 23:08:50.801593    1258 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c3a54b4-2b64-4913-8bbd-2b6594c7b5ee-kube-api-access-lrzq6" (OuterVolumeSpecName: "kube-api-access-lrzq6") pod "9c3a54b4-2b64-4913-8bbd-2b6594c7b5ee" (UID: "9c3a54b4-2b64-4913-8bbd-2b6594c7b5ee"). InnerVolumeSpecName "kube-api-access-lrzq6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 07 23:08:50 addons-245409 kubelet[1258]: I1107 23:08:50.891105    1258 scope.go:117] "RemoveContainer" containerID="ba87007842b09743e7da386fdb053aa892566f04f2b792b8c7675265d52dabe0"
	Nov 07 23:08:50 addons-245409 kubelet[1258]: I1107 23:08:50.896652    1258 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lrzq6\" (UniqueName: \"kubernetes.io/projected/9c3a54b4-2b64-4913-8bbd-2b6594c7b5ee-kube-api-access-lrzq6\") on node \"addons-245409\" DevicePath \"\""
	Nov 07 23:08:50 addons-245409 kubelet[1258]: I1107 23:08:50.928877    1258 scope.go:117] "RemoveContainer" containerID="ba87007842b09743e7da386fdb053aa892566f04f2b792b8c7675265d52dabe0"
	Nov 07 23:08:50 addons-245409 kubelet[1258]: E1107 23:08:50.929584    1258 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba87007842b09743e7da386fdb053aa892566f04f2b792b8c7675265d52dabe0\": container with ID starting with ba87007842b09743e7da386fdb053aa892566f04f2b792b8c7675265d52dabe0 not found: ID does not exist" containerID="ba87007842b09743e7da386fdb053aa892566f04f2b792b8c7675265d52dabe0"
	Nov 07 23:08:50 addons-245409 kubelet[1258]: I1107 23:08:50.929627    1258 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba87007842b09743e7da386fdb053aa892566f04f2b792b8c7675265d52dabe0"} err="failed to get container status \"ba87007842b09743e7da386fdb053aa892566f04f2b792b8c7675265d52dabe0\": rpc error: code = NotFound desc = could not find container \"ba87007842b09743e7da386fdb053aa892566f04f2b792b8c7675265d52dabe0\": container with ID starting with ba87007842b09743e7da386fdb053aa892566f04f2b792b8c7675265d52dabe0 not found: ID does not exist"
	Nov 07 23:08:51 addons-245409 kubelet[1258]: I1107 23:08:51.566613    1258 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9c3a54b4-2b64-4913-8bbd-2b6594c7b5ee" path="/var/lib/kubelet/pods/9c3a54b4-2b64-4913-8bbd-2b6594c7b5ee/volumes"
	Nov 07 23:08:53 addons-245409 kubelet[1258]: I1107 23:08:53.567824    1258 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0aa98b08-b4d8-477d-8e89-d119b686bd38" path="/var/lib/kubelet/pods/0aa98b08-b4d8-477d-8e89-d119b686bd38/volumes"
	Nov 07 23:08:53 addons-245409 kubelet[1258]: I1107 23:08:53.568653    1258 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="cad957d4-12b6-4c95-b0d8-7965fd5231c0" path="/var/lib/kubelet/pods/cad957d4-12b6-4c95-b0d8-7965fd5231c0/volumes"
	Nov 07 23:08:53 addons-245409 kubelet[1258]: E1107 23:08:53.627146    1258 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 07 23:08:53 addons-245409 kubelet[1258]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 07 23:08:53 addons-245409 kubelet[1258]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 07 23:08:53 addons-245409 kubelet[1258]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 07 23:08:53 addons-245409 kubelet[1258]: I1107 23:08:53.935775    1258 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-6qznx" podStartSLOduration=2.242968698 podCreationTimestamp="2023-11-07 23:08:49 +0000 UTC" firstStartedPulling="2023-11-07 23:08:50.601541883 +0000 UTC m=+357.203655480" lastFinishedPulling="2023-11-07 23:08:53.294231004 +0000 UTC m=+359.896344601" observedRunningTime="2023-11-07 23:08:53.935490883 +0000 UTC m=+360.537604500" watchObservedRunningTime="2023-11-07 23:08:53.935657819 +0000 UTC m=+360.537771435"
	Nov 07 23:08:55 addons-245409 kubelet[1258]: I1107 23:08:55.029298    1258 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/918fa0ed-85f2-4ebb-8680-cf9b9d08b3bc-webhook-cert\") pod \"918fa0ed-85f2-4ebb-8680-cf9b9d08b3bc\" (UID: \"918fa0ed-85f2-4ebb-8680-cf9b9d08b3bc\") "
	Nov 07 23:08:55 addons-245409 kubelet[1258]: I1107 23:08:55.029352    1258 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqltn\" (UniqueName: \"kubernetes.io/projected/918fa0ed-85f2-4ebb-8680-cf9b9d08b3bc-kube-api-access-cqltn\") pod \"918fa0ed-85f2-4ebb-8680-cf9b9d08b3bc\" (UID: \"918fa0ed-85f2-4ebb-8680-cf9b9d08b3bc\") "
	Nov 07 23:08:55 addons-245409 kubelet[1258]: I1107 23:08:55.033100    1258 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/918fa0ed-85f2-4ebb-8680-cf9b9d08b3bc-kube-api-access-cqltn" (OuterVolumeSpecName: "kube-api-access-cqltn") pod "918fa0ed-85f2-4ebb-8680-cf9b9d08b3bc" (UID: "918fa0ed-85f2-4ebb-8680-cf9b9d08b3bc"). InnerVolumeSpecName "kube-api-access-cqltn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 07 23:08:55 addons-245409 kubelet[1258]: I1107 23:08:55.033565    1258 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918fa0ed-85f2-4ebb-8680-cf9b9d08b3bc-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "918fa0ed-85f2-4ebb-8680-cf9b9d08b3bc" (UID: "918fa0ed-85f2-4ebb-8680-cf9b9d08b3bc"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 07 23:08:55 addons-245409 kubelet[1258]: I1107 23:08:55.130312    1258 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/918fa0ed-85f2-4ebb-8680-cf9b9d08b3bc-webhook-cert\") on node \"addons-245409\" DevicePath \"\""
	Nov 07 23:08:55 addons-245409 kubelet[1258]: I1107 23:08:55.130415    1258 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cqltn\" (UniqueName: \"kubernetes.io/projected/918fa0ed-85f2-4ebb-8680-cf9b9d08b3bc-kube-api-access-cqltn\") on node \"addons-245409\" DevicePath \"\""
	Nov 07 23:08:55 addons-245409 kubelet[1258]: I1107 23:08:55.394063    1258 scope.go:117] "RemoveContainer" containerID="b5c29571fdbdad4c496bd4e17747cbae67ae78258fd0af82605dd17229eb2e25"
	Nov 07 23:08:55 addons-245409 kubelet[1258]: I1107 23:08:55.418174    1258 scope.go:117] "RemoveContainer" containerID="e055e24751aecddf405adc7fd88901ff32126cb9bd4ff018e63f8293550cf759"
	Nov 07 23:08:55 addons-245409 kubelet[1258]: I1107 23:08:55.445025    1258 scope.go:117] "RemoveContainer" containerID="e24a6231ff208c6c6267478f18e66ee39776b795f401174b3665ac0c3ed29377"
	Nov 07 23:08:55 addons-245409 kubelet[1258]: I1107 23:08:55.566929    1258 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="918fa0ed-85f2-4ebb-8680-cf9b9d08b3bc" path="/var/lib/kubelet/pods/918fa0ed-85f2-4ebb-8680-cf9b9d08b3bc/volumes"
	
	* 
	* ==> storage-provisioner [64caaa8fa3d22f2c7743813ccb518e884d98c714fedc01d3e30daf773d9dd146] <==
	* I1107 23:03:23.359496       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1107 23:03:23.405905       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1107 23:03:23.413431       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1107 23:03:23.436822       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1107 23:03:23.437020       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-245409_d8c26ed7-9e1b-478c-b04f-6eff103fd657!
	I1107 23:03:23.445273       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e7dfb431-b20f-44ef-8db3-cacfee3975ff", APIVersion:"v1", ResourceVersion:"853", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-245409_d8c26ed7-9e1b-478c-b04f-6eff103fd657 became leader
	I1107 23:03:23.538130       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-245409_d8c26ed7-9e1b-478c-b04f-6eff103fd657!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-245409 -n addons-245409
helpers_test.go:261: (dbg) Run:  kubectl --context addons-245409 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (158.06s)
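A minimal re-collection sketch, assuming the addons-245409 profile is still up: the post-mortem above can be reproduced by hand with the same commands the harness used (the ingress-nginx pod listing is an extra check, not part of the test):

    out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-245409 -n addons-245409
    kubectl --context addons-245409 describe node addons-245409
    kubectl --context addons-245409 -n ingress-nginx get pods -o wide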

TestAddons/StoppedEnableDisable (155.48s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-245409
addons_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-245409: exit status 82 (2m1.64249222s)

-- stdout --
	* Stopping node "addons-245409"  ...
	* Stopping node "addons-245409"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:173: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-245409" : exit status 82
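"Stopping node" appears to be printed once per attempt above, and the VM was still "Running" when the GUEST_STOP_TIMEOUT fired. A hedged diagnosis sketch, assuming this KVM_Linux_crio job runs the guest under libvirt and virsh is on the CI host's PATH:

    virsh list --all        # is the addons-245409 domain still running?
    out/minikube-linux-amd64 stop -p addons-245409 --alsologtostderr -v=1
    cat /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log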
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-245409
addons_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-245409: exit status 11 (21.551126939s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.205:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:177: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-245409" : exit status 11
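Every addon command from here on fails in the same pre-flight step: the paused check dials the guest's SSH port (192.168.39.205:22) and gets no route to host, so the addon itself is never reached. A quick host-side reachability sketch (ping and nc being available on the CI host is an assumption):

    ping -c 2 192.168.39.205
    nc -vz -w 5 192.168.39.205 22
    out/minikube-linux-amd64 -p addons-245409 ssh "true"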
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-245409
addons_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-245409: exit status 11 (6.145999803s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.205:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:181: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-245409" : exit status 11
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-245409
addons_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-245409: exit status 11 (6.141858749s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.205:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:186: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-245409" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (155.48s)

TestFunctional/parallel/License (0.28s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2284: (dbg) Non-zero exit: out/minikube-linux-amd64 license: exit status 40 (278.614776ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to INET_LICENSES: Failed to download licenses: download request did not return a 200, received: 404
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_license_42713f820c0ac68901ecf7b12bfdf24c2cafe65d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2285: command "\n\n" failed: exit status 40
--- FAIL: TestFunctional/parallel/License (0.28s)
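The failing download URL is not echoed in the output above. A hedged repro, assuming minikube's global klog flags will surface the request that returned the 404:

    out/minikube-linux-amd64 license --alsologtostderr -v=1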

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (7.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 image load --daemon gcr.io/google-containers/addon-resizer:functional-514284 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-514284 image load --daemon gcr.io/google-containers/addon-resizer:functional-514284 --alsologtostderr: (5.104193868s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-514284 image ls: (2.347416647s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-514284" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (7.45s)
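The image load itself returned success; only the follow-up listing came back without the tag. A manual round-trip sketch reusing the test's own commands (the docker images check is an added assumption that the tag is still present in the CI host's daemon):

    docker images gcr.io/google-containers/addon-resizer:functional-514284
    out/minikube-linux-amd64 -p functional-514284 image load --daemon gcr.io/google-containers/addon-resizer:functional-514284 --alsologtostderr
    out/minikube-linux-amd64 -p functional-514284 image ls | grep addon-resizer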

TestIngressAddonLegacy/serial/ValidateIngressAddons (172.79s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-823610 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-823610 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.189097994s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-823610 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-823610 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d0ab4206-2f23-414e-916b-c9f8899844cb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d0ab4206-2f23-414e-916b-c9f8899844cb] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.137130018s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-823610 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1107 23:20:38.956312   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
E1107 23:20:42.436954   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
E1107 23:20:42.442212   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
E1107 23:20:42.452475   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
E1107 23:20:42.472750   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
E1107 23:20:42.513032   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
E1107 23:20:42.593367   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
E1107 23:20:42.753773   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
E1107 23:20:43.074311   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
E1107 23:20:43.715212   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
E1107 23:20:44.995965   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
E1107 23:20:47.557740   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
E1107 23:20:52.678006   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
E1107 23:21:02.918716   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
E1107 23:21:06.641904   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
E1107 23:21:23.399783   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
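The E1107 cert_rotation lines above look like background noise rather than part of this test: they appear to come from client-go's certificate-rotation watcher, which is still polling client certs for the addons-245409 and functional-514284 profiles even though those profiles seem to have been deleted earlier in the run (note the delete -p functional-514284 entry in the audit table below). A quick sanity check of which profiles still exist, assuming the same MINIKUBE_HOME:

	ls /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/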
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-823610 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.842168535s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
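Exit status 28 in the stderr block is curl's exit code for a timed-out operation, passed back through ssh, so the request to the ingress controller never got an answer within the 2m10s window. When triaging by hand, a sketch along these lines (the same commands the test runs, plus pod and ingress state) usually shows whether the controller or the backing nginx pod is at fault:

	kubectl --context ingress-addon-legacy-823610 -n ingress-nginx get pods -o wide
	kubectl --context ingress-addon-legacy-823610 get ingress,svc,pods
	out/minikube-linux-amd64 -p ingress-addon-legacy-823610 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"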
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-823610 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-823610 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.221
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-823610 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-823610 addons disable ingress-dns --alsologtostderr -v=1: (5.338079193s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-823610 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-823610 addons disable ingress --alsologtostderr -v=1: (7.542822378s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-823610 -n ingress-addon-legacy-823610
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-823610 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-823610 logs -n 25: (1.098028375s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                 Args                 |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| dashboard      | --url --port 36195                   | functional-514284           | jenkins | v1.32.0 | 07 Nov 23 23:16 UTC | 07 Nov 23 23:16 UTC |
	|                | -p functional-514284                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| service        | functional-514284 service            | functional-514284           | jenkins | v1.32.0 | 07 Nov 23 23:16 UTC | 07 Nov 23 23:16 UTC |
	|                | hello-node-connect --url             |                             |         |         |                     |                     |
	| service        | functional-514284 service list       | functional-514284           | jenkins | v1.32.0 | 07 Nov 23 23:16 UTC | 07 Nov 23 23:16 UTC |
	| update-context | functional-514284                    | functional-514284           | jenkins | v1.32.0 | 07 Nov 23 23:16 UTC | 07 Nov 23 23:16 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-514284                    | functional-514284           | jenkins | v1.32.0 | 07 Nov 23 23:16 UTC | 07 Nov 23 23:16 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-514284                    | functional-514284           | jenkins | v1.32.0 | 07 Nov 23 23:16 UTC | 07 Nov 23 23:16 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| image          | functional-514284                    | functional-514284           | jenkins | v1.32.0 | 07 Nov 23 23:16 UTC | 07 Nov 23 23:16 UTC |
	|                | image ls --format short              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-514284                    | functional-514284           | jenkins | v1.32.0 | 07 Nov 23 23:16 UTC | 07 Nov 23 23:16 UTC |
	|                | image ls --format yaml               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| service        | functional-514284 service list       | functional-514284           | jenkins | v1.32.0 | 07 Nov 23 23:16 UTC | 07 Nov 23 23:16 UTC |
	|                | -o json                              |                             |         |         |                     |                     |
	| ssh            | functional-514284 ssh pgrep          | functional-514284           | jenkins | v1.32.0 | 07 Nov 23 23:16 UTC |                     |
	|                | buildkitd                            |                             |         |         |                     |                     |
	| image          | functional-514284 image build -t     | functional-514284           | jenkins | v1.32.0 | 07 Nov 23 23:16 UTC | 07 Nov 23 23:16 UTC |
	|                | localhost/my-image:functional-514284 |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr     |                             |         |         |                     |                     |
	| service        | functional-514284 service            | functional-514284           | jenkins | v1.32.0 | 07 Nov 23 23:16 UTC | 07 Nov 23 23:16 UTC |
	|                | --namespace=default --https          |                             |         |         |                     |                     |
	|                | --url hello-node                     |                             |         |         |                     |                     |
	| service        | functional-514284                    | functional-514284           | jenkins | v1.32.0 | 07 Nov 23 23:16 UTC | 07 Nov 23 23:16 UTC |
	|                | service hello-node --url             |                             |         |         |                     |                     |
	|                | --format={{.IP}}                     |                             |         |         |                     |                     |
	| service        | functional-514284 service            | functional-514284           | jenkins | v1.32.0 | 07 Nov 23 23:16 UTC | 07 Nov 23 23:16 UTC |
	|                | hello-node --url                     |                             |         |         |                     |                     |
	| image          | functional-514284                    | functional-514284           | jenkins | v1.32.0 | 07 Nov 23 23:16 UTC | 07 Nov 23 23:16 UTC |
	|                | image ls --format json               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-514284                    | functional-514284           | jenkins | v1.32.0 | 07 Nov 23 23:16 UTC | 07 Nov 23 23:16 UTC |
	|                | image ls --format table              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-514284 image ls           | functional-514284           | jenkins | v1.32.0 | 07 Nov 23 23:16 UTC | 07 Nov 23 23:16 UTC |
	| delete         | -p functional-514284                 | functional-514284           | jenkins | v1.32.0 | 07 Nov 23 23:16 UTC | 07 Nov 23 23:16 UTC |
	| start          | -p ingress-addon-legacy-823610       | ingress-addon-legacy-823610 | jenkins | v1.32.0 | 07 Nov 23 23:16 UTC | 07 Nov 23 23:18 UTC |
	|                | --kubernetes-version=v1.18.20        |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true            |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	|                | -v=5 --driver=kvm2                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-823610          | ingress-addon-legacy-823610 | jenkins | v1.32.0 | 07 Nov 23 23:18 UTC | 07 Nov 23 23:18 UTC |
	|                | addons enable ingress                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-823610          | ingress-addon-legacy-823610 | jenkins | v1.32.0 | 07 Nov 23 23:18 UTC | 07 Nov 23 23:18 UTC |
	|                | addons enable ingress-dns            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-823610          | ingress-addon-legacy-823610 | jenkins | v1.32.0 | 07 Nov 23 23:19 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/        |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'         |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-823610 ip       | ingress-addon-legacy-823610 | jenkins | v1.32.0 | 07 Nov 23 23:21 UTC | 07 Nov 23 23:21 UTC |
	| addons         | ingress-addon-legacy-823610          | ingress-addon-legacy-823610 | jenkins | v1.32.0 | 07 Nov 23 23:21 UTC | 07 Nov 23 23:21 UTC |
	|                | addons disable ingress-dns           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-823610          | ingress-addon-legacy-823610 | jenkins | v1.32.0 | 07 Nov 23 23:21 UTC | 07 Nov 23 23:21 UTC |
	|                | addons disable ingress               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/07 23:16:45
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 23:16:45.158864   25442 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:16:45.159112   25442 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:16:45.159120   25442 out.go:309] Setting ErrFile to fd 2...
	I1107 23:16:45.159125   25442 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:16:45.159324   25442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
	I1107 23:16:45.159861   25442 out.go:303] Setting JSON to false
	I1107 23:16:45.160664   25442 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3554,"bootTime":1699395451,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 23:16:45.160715   25442 start.go:138] virtualization: kvm guest
	I1107 23:16:45.163084   25442 out.go:177] * [ingress-addon-legacy-823610] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 23:16:45.164575   25442 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 23:16:45.164583   25442 notify.go:220] Checking for updates...
	I1107 23:16:45.166995   25442 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:16:45.168345   25442 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1107 23:16:45.169575   25442 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	I1107 23:16:45.170771   25442 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 23:16:45.171961   25442 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 23:16:45.173371   25442 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:16:45.206640   25442 out.go:177] * Using the kvm2 driver based on user configuration
	I1107 23:16:45.207889   25442 start.go:298] selected driver: kvm2
	I1107 23:16:45.207903   25442 start.go:902] validating driver "kvm2" against <nil>
	I1107 23:16:45.207913   25442 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 23:16:45.208556   25442 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:16:45.208638   25442 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17585-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1107 23:16:45.222717   25442 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1107 23:16:45.222765   25442 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1107 23:16:45.222942   25442 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 23:16:45.222993   25442 cni.go:84] Creating CNI manager for ""
	I1107 23:16:45.223005   25442 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1107 23:16:45.223014   25442 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1107 23:16:45.223022   25442 start_flags.go:323] config:
	{Name:ingress-addon-legacy-823610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-823610 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:16:45.223134   25442 iso.go:125] acquiring lock: {Name:mk02d02b2a7a45dbdd1b46a32fb0724673cb4d8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:16:45.224920   25442 out.go:177] * Starting control plane node ingress-addon-legacy-823610 in cluster ingress-addon-legacy-823610
	I1107 23:16:45.226279   25442 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1107 23:16:45.730679   25442 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1107 23:16:45.730722   25442 cache.go:56] Caching tarball of preloaded images
	I1107 23:16:45.730873   25442 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1107 23:16:45.732896   25442 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1107 23:16:45.734168   25442 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1107 23:16:45.848988   25442 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1107 23:16:59.372967   25442 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1107 23:16:59.373055   25442 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1107 23:17:00.352410   25442 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
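The preload handling above downloads the tarball with an md5 checksum pinned in the URL query string and then re-verifies the cached copy. Assuming the cache path from the log, the same verification can be repeated by hand:

	cd /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball
	echo "0d02e096853189c5b37812b400898e14  preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4" | md5sum -c -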
	I1107 23:17:00.352796   25442 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/config.json ...
	I1107 23:17:00.352847   25442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/config.json: {Name:mkc177669fffcb873dc02af312cab23c54c16b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:17:00.353032   25442 start.go:365] acquiring machines lock for ingress-addon-legacy-823610: {Name:mkf032f30be570950285b6e092e75fb29cc3d166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1107 23:17:00.353069   25442 start.go:369] acquired machines lock for "ingress-addon-legacy-823610" in 17.894µs
	I1107 23:17:00.353083   25442 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-823610 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-823610 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1107 23:17:00.353151   25442 start.go:125] createHost starting for "" (driver="kvm2")
	I1107 23:17:00.355873   25442 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1107 23:17:00.356022   25442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:17:00.356065   25442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:17:00.369778   25442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43191
	I1107 23:17:00.370219   25442 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:17:00.370747   25442 main.go:141] libmachine: Using API Version  1
	I1107 23:17:00.370766   25442 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:17:00.371102   25442 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:17:00.371285   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetMachineName
	I1107 23:17:00.371451   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .DriverName
	I1107 23:17:00.371647   25442 start.go:159] libmachine.API.Create for "ingress-addon-legacy-823610" (driver="kvm2")
	I1107 23:17:00.371681   25442 client.go:168] LocalClient.Create starting
	I1107 23:17:00.371713   25442 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem
	I1107 23:17:00.371753   25442 main.go:141] libmachine: Decoding PEM data...
	I1107 23:17:00.371772   25442 main.go:141] libmachine: Parsing certificate...
	I1107 23:17:00.371834   25442 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem
	I1107 23:17:00.371865   25442 main.go:141] libmachine: Decoding PEM data...
	I1107 23:17:00.371880   25442 main.go:141] libmachine: Parsing certificate...
	I1107 23:17:00.371907   25442 main.go:141] libmachine: Running pre-create checks...
	I1107 23:17:00.371922   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .PreCreateCheck
	I1107 23:17:00.372265   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetConfigRaw
	I1107 23:17:00.372678   25442 main.go:141] libmachine: Creating machine...
	I1107 23:17:00.372697   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .Create
	I1107 23:17:00.372850   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Creating KVM machine...
	I1107 23:17:00.374013   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | found existing default KVM network
	I1107 23:17:00.374673   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | I1107 23:17:00.374520   25500 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I1107 23:17:00.380268   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | trying to create private KVM network mk-ingress-addon-legacy-823610 192.168.39.0/24...
	I1107 23:17:00.446209   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Setting up store path in /home/jenkins/minikube-integration/17585-9647/.minikube/machines/ingress-addon-legacy-823610 ...
	I1107 23:17:00.446264   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Building disk image from file:///home/jenkins/minikube-integration/17585-9647/.minikube/cache/iso/amd64/minikube-v1.32.1-amd64.iso
	I1107 23:17:00.446277   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | private KVM network mk-ingress-addon-legacy-823610 192.168.39.0/24 created
	I1107 23:17:00.446305   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Downloading /home/jenkins/minikube-integration/17585-9647/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17585-9647/.minikube/cache/iso/amd64/minikube-v1.32.1-amd64.iso...
	I1107 23:17:00.446325   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | I1107 23:17:00.446150   25500 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17585-9647/.minikube
	I1107 23:17:00.643979   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | I1107 23:17:00.643837   25500 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/ingress-addon-legacy-823610/id_rsa...
	I1107 23:17:00.782919   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | I1107 23:17:00.782792   25500 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/ingress-addon-legacy-823610/ingress-addon-legacy-823610.rawdisk...
	I1107 23:17:00.782948   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | Writing magic tar header
	I1107 23:17:00.782960   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | Writing SSH key tar header
	I1107 23:17:00.782969   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | I1107 23:17:00.782902   25500 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17585-9647/.minikube/machines/ingress-addon-legacy-823610 ...
	I1107 23:17:00.783001   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/ingress-addon-legacy-823610
	I1107 23:17:00.783053   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Setting executable bit set on /home/jenkins/minikube-integration/17585-9647/.minikube/machines/ingress-addon-legacy-823610 (perms=drwx------)
	I1107 23:17:00.783081   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Setting executable bit set on /home/jenkins/minikube-integration/17585-9647/.minikube/machines (perms=drwxr-xr-x)
	I1107 23:17:00.783102   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Setting executable bit set on /home/jenkins/minikube-integration/17585-9647/.minikube (perms=drwxr-xr-x)
	I1107 23:17:00.783114   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17585-9647/.minikube/machines
	I1107 23:17:00.783129   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17585-9647/.minikube
	I1107 23:17:00.783148   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17585-9647
	I1107 23:17:00.783164   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Setting executable bit set on /home/jenkins/minikube-integration/17585-9647 (perms=drwxrwxr-x)
	I1107 23:17:00.783180   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1107 23:17:00.783193   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1107 23:17:00.783206   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Creating domain...
	I1107 23:17:00.783223   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1107 23:17:00.783238   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | Checking permissions on dir: /home/jenkins
	I1107 23:17:00.783263   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | Checking permissions on dir: /home
	I1107 23:17:00.783286   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | Skipping /home - not owner
	I1107 23:17:00.784276   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) define libvirt domain using xml: 
	I1107 23:17:00.784302   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) <domain type='kvm'>
	I1107 23:17:00.784329   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)   <name>ingress-addon-legacy-823610</name>
	I1107 23:17:00.784344   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)   <memory unit='MiB'>4096</memory>
	I1107 23:17:00.784361   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)   <vcpu>2</vcpu>
	I1107 23:17:00.784369   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)   <features>
	I1107 23:17:00.784376   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)     <acpi/>
	I1107 23:17:00.784384   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)     <apic/>
	I1107 23:17:00.784390   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)     <pae/>
	I1107 23:17:00.784398   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)     
	I1107 23:17:00.784406   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)   </features>
	I1107 23:17:00.784418   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)   <cpu mode='host-passthrough'>
	I1107 23:17:00.784433   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)   
	I1107 23:17:00.784447   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)   </cpu>
	I1107 23:17:00.784460   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)   <os>
	I1107 23:17:00.784475   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)     <type>hvm</type>
	I1107 23:17:00.784484   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)     <boot dev='cdrom'/>
	I1107 23:17:00.784490   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)     <boot dev='hd'/>
	I1107 23:17:00.784498   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)     <bootmenu enable='no'/>
	I1107 23:17:00.784504   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)   </os>
	I1107 23:17:00.784515   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)   <devices>
	I1107 23:17:00.784531   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)     <disk type='file' device='cdrom'>
	I1107 23:17:00.784549   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)       <source file='/home/jenkins/minikube-integration/17585-9647/.minikube/machines/ingress-addon-legacy-823610/boot2docker.iso'/>
	I1107 23:17:00.784564   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)       <target dev='hdc' bus='scsi'/>
	I1107 23:17:00.784575   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)       <readonly/>
	I1107 23:17:00.784586   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)     </disk>
	I1107 23:17:00.784592   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)     <disk type='file' device='disk'>
	I1107 23:17:00.784606   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1107 23:17:00.784629   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)       <source file='/home/jenkins/minikube-integration/17585-9647/.minikube/machines/ingress-addon-legacy-823610/ingress-addon-legacy-823610.rawdisk'/>
	I1107 23:17:00.784654   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)       <target dev='hda' bus='virtio'/>
	I1107 23:17:00.784663   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)     </disk>
	I1107 23:17:00.784675   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)     <interface type='network'>
	I1107 23:17:00.784691   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)       <source network='mk-ingress-addon-legacy-823610'/>
	I1107 23:17:00.784705   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)       <model type='virtio'/>
	I1107 23:17:00.784717   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)     </interface>
	I1107 23:17:00.784730   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)     <interface type='network'>
	I1107 23:17:00.784741   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)       <source network='default'/>
	I1107 23:17:00.784755   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)       <model type='virtio'/>
	I1107 23:17:00.784772   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)     </interface>
	I1107 23:17:00.784787   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)     <serial type='pty'>
	I1107 23:17:00.784802   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)       <target port='0'/>
	I1107 23:17:00.784829   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)     </serial>
	I1107 23:17:00.784848   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)     <console type='pty'>
	I1107 23:17:00.784871   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)       <target type='serial' port='0'/>
	I1107 23:17:00.784883   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)     </console>
	I1107 23:17:00.784896   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)     <rng model='virtio'>
	I1107 23:17:00.784909   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)       <backend model='random'>/dev/random</backend>
	I1107 23:17:00.784922   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)     </rng>
	I1107 23:17:00.784948   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)     
	I1107 23:17:00.784969   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)     
	I1107 23:17:00.784984   25442 main.go:141] libmachine: (ingress-addon-legacy-823610)   </devices>
	I1107 23:17:00.784996   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) </domain>
	I1107 23:17:00.785017   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) 
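The domain definition logged line-by-line above is plain libvirt domain XML: a 2-vCPU, 4096 MiB KVM guest that boots the boot2docker ISO from a read-only SCSI cdrom, keeps its raw disk on virtio, and gets two virtio NICs (the private mk-ingress-addon-legacy-823610 network plus libvirt's default network). Once the domain has been defined, the assembled XML can be dumped and sanity-checked with stock libvirt tooling:

	virsh dumpxml ingress-addon-legacy-823610 > /tmp/domain.xml
	virt-xml-validate /tmp/domain.xml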
	I1107 23:17:00.789543   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:a0:bd:75 in network default
	I1107 23:17:00.790085   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Ensuring networks are active...
	I1107 23:17:00.790105   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:00.790807   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Ensuring network default is active
	I1107 23:17:00.791076   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Ensuring network mk-ingress-addon-legacy-823610 is active
	I1107 23:17:00.791568   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Getting domain xml...
	I1107 23:17:00.792162   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Creating domain...
	I1107 23:17:01.998067   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Waiting to get IP...
	I1107 23:17:02.000020   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:02.000412   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | unable to find current IP address of domain ingress-addon-legacy-823610 in network mk-ingress-addon-legacy-823610
	I1107 23:17:02.000446   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | I1107 23:17:02.000375   25500 retry.go:31] will retry after 263.216286ms: waiting for machine to come up
	I1107 23:17:02.265020   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:02.265663   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | unable to find current IP address of domain ingress-addon-legacy-823610 in network mk-ingress-addon-legacy-823610
	I1107 23:17:02.265693   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | I1107 23:17:02.265581   25500 retry.go:31] will retry after 346.793438ms: waiting for machine to come up
	I1107 23:17:02.614088   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:02.614590   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | unable to find current IP address of domain ingress-addon-legacy-823610 in network mk-ingress-addon-legacy-823610
	I1107 23:17:02.614630   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | I1107 23:17:02.614545   25500 retry.go:31] will retry after 422.411167ms: waiting for machine to come up
	I1107 23:17:03.038057   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:03.038617   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | unable to find current IP address of domain ingress-addon-legacy-823610 in network mk-ingress-addon-legacy-823610
	I1107 23:17:03.038645   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | I1107 23:17:03.038560   25500 retry.go:31] will retry after 477.596228ms: waiting for machine to come up
	I1107 23:17:03.518307   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:03.518788   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | unable to find current IP address of domain ingress-addon-legacy-823610 in network mk-ingress-addon-legacy-823610
	I1107 23:17:03.518823   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | I1107 23:17:03.518738   25500 retry.go:31] will retry after 709.654646ms: waiting for machine to come up
	I1107 23:17:04.229759   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:04.230228   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | unable to find current IP address of domain ingress-addon-legacy-823610 in network mk-ingress-addon-legacy-823610
	I1107 23:17:04.230257   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | I1107 23:17:04.230189   25500 retry.go:31] will retry after 681.894925ms: waiting for machine to come up
	I1107 23:17:04.914106   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:04.914507   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | unable to find current IP address of domain ingress-addon-legacy-823610 in network mk-ingress-addon-legacy-823610
	I1107 23:17:04.914535   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | I1107 23:17:04.914461   25500 retry.go:31] will retry after 787.939092ms: waiting for machine to come up
	I1107 23:17:05.704419   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:05.704875   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | unable to find current IP address of domain ingress-addon-legacy-823610 in network mk-ingress-addon-legacy-823610
	I1107 23:17:05.704905   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | I1107 23:17:05.704839   25500 retry.go:31] will retry after 1.010739067s: waiting for machine to come up
	I1107 23:17:06.717081   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:06.717574   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | unable to find current IP address of domain ingress-addon-legacy-823610 in network mk-ingress-addon-legacy-823610
	I1107 23:17:06.717610   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | I1107 23:17:06.717545   25500 retry.go:31] will retry after 1.686813s: waiting for machine to come up
	I1107 23:17:08.405570   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:08.406004   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | unable to find current IP address of domain ingress-addon-legacy-823610 in network mk-ingress-addon-legacy-823610
	I1107 23:17:08.406036   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | I1107 23:17:08.405944   25500 retry.go:31] will retry after 1.938682055s: waiting for machine to come up
	I1107 23:17:10.347090   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:10.347520   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | unable to find current IP address of domain ingress-addon-legacy-823610 in network mk-ingress-addon-legacy-823610
	I1107 23:17:10.347543   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | I1107 23:17:10.347469   25500 retry.go:31] will retry after 2.374092578s: waiting for machine to come up
	I1107 23:17:12.724404   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:12.724889   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | unable to find current IP address of domain ingress-addon-legacy-823610 in network mk-ingress-addon-legacy-823610
	I1107 23:17:12.724938   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | I1107 23:17:12.724826   25500 retry.go:31] will retry after 3.561114739s: waiting for machine to come up
	I1107 23:17:16.287245   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:16.287666   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | unable to find current IP address of domain ingress-addon-legacy-823610 in network mk-ingress-addon-legacy-823610
	I1107 23:17:16.287694   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | I1107 23:17:16.287650   25500 retry.go:31] will retry after 2.982209452s: waiting for machine to come up
	I1107 23:17:19.273844   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:19.274397   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | unable to find current IP address of domain ingress-addon-legacy-823610 in network mk-ingress-addon-legacy-823610
	I1107 23:17:19.274426   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | I1107 23:17:19.274340   25500 retry.go:31] will retry after 4.153527066s: waiting for machine to come up
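The loop above polls libvirt for the guest's DHCP lease with a growing backoff (from roughly 263 ms up to about 4 s per retry) until an address shows up, around 20 seconds after domain creation in this run. The lease table it is waiting on can be read directly, using the network name from the log:

	virsh net-dhcp-leases mk-ingress-addon-legacy-823610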
	I1107 23:17:23.431338   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:23.431714   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Found IP for machine: 192.168.39.221
	I1107 23:17:23.431727   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Reserving static IP address...
	I1107 23:17:23.431738   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has current primary IP address 192.168.39.221 and MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:23.432128   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-823610", mac: "52:54:00:4c:2d:0c", ip: "192.168.39.221"} in network mk-ingress-addon-legacy-823610
	I1107 23:17:23.501603   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | Getting to WaitForSSH function...
	I1107 23:17:23.501642   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Reserved static IP address: 192.168.39.221
	I1107 23:17:23.501659   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Waiting for SSH to be available...
	I1107 23:17:23.504220   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:23.504637   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:2d:0c", ip: ""} in network mk-ingress-addon-legacy-823610: {Iface:virbr1 ExpiryTime:2023-11-08 00:17:16 +0000 UTC Type:0 Mac:52:54:00:4c:2d:0c Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4c:2d:0c}
	I1107 23:17:23.504672   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined IP address 192.168.39.221 and MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:23.504787   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | Using SSH client type: external
	I1107 23:17:23.504828   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/ingress-addon-legacy-823610/id_rsa (-rw-------)
	I1107 23:17:23.504867   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.221 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/ingress-addon-legacy-823610/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1107 23:17:23.504889   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | About to run SSH command:
	I1107 23:17:23.504909   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | exit 0
	I1107 23:17:23.600722   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | SSH cmd err, output: <nil>: 
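As the DBG lines show, WaitForSSH simply shells out to the system ssh client with host-key checking disabled and retries exit 0 until it succeeds. The equivalent manual probe, reusing the key path and IP from the log, is:

	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
		-i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/ingress-addon-legacy-823610/id_rsa \
		docker@192.168.39.221 'exit 0' && echo ssh is up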
	I1107 23:17:23.600980   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) KVM machine creation complete!
	I1107 23:17:23.601281   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetConfigRaw
	I1107 23:17:23.601825   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .DriverName
	I1107 23:17:23.602040   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .DriverName
	I1107 23:17:23.602201   25442 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1107 23:17:23.602216   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetState
	I1107 23:17:23.603358   25442 main.go:141] libmachine: Detecting operating system of created instance...
	I1107 23:17:23.603372   25442 main.go:141] libmachine: Waiting for SSH to be available...
	I1107 23:17:23.603377   25442 main.go:141] libmachine: Getting to WaitForSSH function...
	I1107 23:17:23.603384   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHHostname
	I1107 23:17:23.605407   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:23.605809   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:2d:0c", ip: ""} in network mk-ingress-addon-legacy-823610: {Iface:virbr1 ExpiryTime:2023-11-08 00:17:16 +0000 UTC Type:0 Mac:52:54:00:4c:2d:0c Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ingress-addon-legacy-823610 Clientid:01:52:54:00:4c:2d:0c}
	I1107 23:17:23.605843   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined IP address 192.168.39.221 and MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:23.605939   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHPort
	I1107 23:17:23.606107   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHKeyPath
	I1107 23:17:23.606312   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHKeyPath
	I1107 23:17:23.606468   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHUsername
	I1107 23:17:23.606621   25442 main.go:141] libmachine: Using SSH client type: native
	I1107 23:17:23.606944   25442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1107 23:17:23.606955   25442 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1107 23:17:23.731859   25442 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1107 23:17:23.731886   25442 main.go:141] libmachine: Detecting the provisioner...
	I1107 23:17:23.731898   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHHostname
	I1107 23:17:23.734801   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:23.735143   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:2d:0c", ip: ""} in network mk-ingress-addon-legacy-823610: {Iface:virbr1 ExpiryTime:2023-11-08 00:17:16 +0000 UTC Type:0 Mac:52:54:00:4c:2d:0c Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ingress-addon-legacy-823610 Clientid:01:52:54:00:4c:2d:0c}
	I1107 23:17:23.735176   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined IP address 192.168.39.221 and MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:23.735294   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHPort
	I1107 23:17:23.735479   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHKeyPath
	I1107 23:17:23.735632   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHKeyPath
	I1107 23:17:23.735789   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHUsername
	I1107 23:17:23.735994   25442 main.go:141] libmachine: Using SSH client type: native
	I1107 23:17:23.736481   25442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1107 23:17:23.736500   25442 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1107 23:17:23.861495   25442 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb75713b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1107 23:17:23.861576   25442 main.go:141] libmachine: found compatible host: buildroot
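Provisioner detection keys off the /etc/os-release dump above. A small sketch of parsing that KEY=VALUE format into a map (simplified; not the detector libmachine actually ships):

// osrelease.go - parse /etc/os-release style KEY=VALUE data.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func parseOSRelease(data string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(data))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`) // PRETTY_NAME and friends are quoted
	}
	return out
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2021.02.12-1-gb75713b-dirty\nID=buildroot\n"
	info := parseOSRelease(sample)
	fmt.Println(info["ID"], info["VERSION"]) // buildroot 2021.02.12-1-gb75713b-dirty
}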
	I1107 23:17:23.861591   25442 main.go:141] libmachine: Provisioning with buildroot...
	I1107 23:17:23.861603   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetMachineName
	I1107 23:17:23.861803   25442 buildroot.go:166] provisioning hostname "ingress-addon-legacy-823610"
	I1107 23:17:23.861827   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetMachineName
	I1107 23:17:23.862017   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHHostname
	I1107 23:17:23.864570   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:23.864878   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:2d:0c", ip: ""} in network mk-ingress-addon-legacy-823610: {Iface:virbr1 ExpiryTime:2023-11-08 00:17:16 +0000 UTC Type:0 Mac:52:54:00:4c:2d:0c Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ingress-addon-legacy-823610 Clientid:01:52:54:00:4c:2d:0c}
	I1107 23:17:23.864909   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined IP address 192.168.39.221 and MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:23.865041   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHPort
	I1107 23:17:23.865208   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHKeyPath
	I1107 23:17:23.865318   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHKeyPath
	I1107 23:17:23.865457   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHUsername
	I1107 23:17:23.865606   25442 main.go:141] libmachine: Using SSH client type: native
	I1107 23:17:23.866032   25442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1107 23:17:23.866053   25442 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-823610 && echo "ingress-addon-legacy-823610" | sudo tee /etc/hostname
	I1107 23:17:24.005101   25442 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-823610
	
	I1107 23:17:24.005152   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHHostname
	I1107 23:17:24.007869   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:24.008147   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:2d:0c", ip: ""} in network mk-ingress-addon-legacy-823610: {Iface:virbr1 ExpiryTime:2023-11-08 00:17:16 +0000 UTC Type:0 Mac:52:54:00:4c:2d:0c Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ingress-addon-legacy-823610 Clientid:01:52:54:00:4c:2d:0c}
	I1107 23:17:24.008176   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined IP address 192.168.39.221 and MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:24.008299   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHPort
	I1107 23:17:24.008499   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHKeyPath
	I1107 23:17:24.008641   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHKeyPath
	I1107 23:17:24.008806   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHUsername
	I1107 23:17:24.008984   25442 main.go:141] libmachine: Using SSH client type: native
	I1107 23:17:24.009319   25442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1107 23:17:24.009347   25442 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-823610' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-823610/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-823610' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 23:17:24.140458   25442 main.go:141] libmachine: SSH cmd err, output: <nil>: 
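The shell block above only touches /etc/hosts when the 127.0.1.1 entry is missing or stale: sed rewrites an existing line, tee -a appends otherwise. The same idempotent edit as a pure function, slightly simplified (`ensureHostsEntry` is illustrative, not a minikube helper):

// hosts.go - idempotently pin a hostname to 127.0.1.1 in hosts-file content.
package main

import (
	"fmt"
	"strings"
)

func ensureHostsEntry(content, hostname string) string {
	want := "127.0.1.1 " + hostname
	lines := strings.Split(content, "\n")
	for i, line := range lines {
		fields := strings.Fields(line)
		// A 127.0.1.1 line exists: rewrite it in place, like the sed branch.
		if len(fields) >= 2 && fields[0] == "127.0.1.1" {
			if line == want {
				return content // already correct, nothing to do
			}
			lines[i] = want
			return strings.Join(lines, "\n")
		}
	}
	// No 127.0.1.1 line at all: append, like the tee -a branch.
	return strings.TrimRight(content, "\n") + "\n" + want + "\n"
}

func main() {
	before := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
	fmt.Print(ensureHostsEntry(before, "ingress-addon-legacy-823610"))
}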
	I1107 23:17:24.140483   25442 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1107 23:17:24.140499   25442 buildroot.go:174] setting up certificates
	I1107 23:17:24.140509   25442 provision.go:83] configureAuth start
	I1107 23:17:24.140517   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetMachineName
	I1107 23:17:24.140772   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetIP
	I1107 23:17:24.143491   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:24.143846   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:2d:0c", ip: ""} in network mk-ingress-addon-legacy-823610: {Iface:virbr1 ExpiryTime:2023-11-08 00:17:16 +0000 UTC Type:0 Mac:52:54:00:4c:2d:0c Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ingress-addon-legacy-823610 Clientid:01:52:54:00:4c:2d:0c}
	I1107 23:17:24.143867   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined IP address 192.168.39.221 and MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:24.144008   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHHostname
	I1107 23:17:24.146426   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:24.146746   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:2d:0c", ip: ""} in network mk-ingress-addon-legacy-823610: {Iface:virbr1 ExpiryTime:2023-11-08 00:17:16 +0000 UTC Type:0 Mac:52:54:00:4c:2d:0c Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ingress-addon-legacy-823610 Clientid:01:52:54:00:4c:2d:0c}
	I1107 23:17:24.146775   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined IP address 192.168.39.221 and MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:24.146871   25442 provision.go:138] copyHostCerts
	I1107 23:17:24.146909   25442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1107 23:17:24.146943   25442 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1107 23:17:24.146953   25442 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1107 23:17:24.147024   25442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1107 23:17:24.147100   25442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1107 23:17:24.147116   25442 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1107 23:17:24.147119   25442 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1107 23:17:24.147142   25442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1107 23:17:24.147182   25442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1107 23:17:24.147197   25442 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1107 23:17:24.147203   25442 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1107 23:17:24.147222   25442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1107 23:17:24.147264   25442 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-823610 san=[192.168.39.221 192.168.39.221 localhost 127.0.0.1 minikube ingress-addon-legacy-823610]
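The server cert generated here carries SANs for every address a client might use: the VM IP (listed twice in the log, once as machine IP and once as SSH IP), localhost, 127.0.0.1, and both hostnames. A condensed crypto/x509 sketch of issuing a cert with that SAN set; note minikube signs with its CA (the ca-key.pem above), whereas this sketch self-signs for brevity:

// servercert.go - generate a cert valid for a SAN list, self-signed for brevity.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ingress-addon-legacy-823610"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the log: IP addresses plus DNS names.
		IPAddresses: []net.IP{net.ParseIP("192.168.39.221"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "ingress-addon-legacy-823610"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}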
	I1107 23:17:24.367180   25442 provision.go:172] copyRemoteCerts
	I1107 23:17:24.367230   25442 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 23:17:24.367258   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHHostname
	I1107 23:17:24.369841   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:24.370189   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:2d:0c", ip: ""} in network mk-ingress-addon-legacy-823610: {Iface:virbr1 ExpiryTime:2023-11-08 00:17:16 +0000 UTC Type:0 Mac:52:54:00:4c:2d:0c Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ingress-addon-legacy-823610 Clientid:01:52:54:00:4c:2d:0c}
	I1107 23:17:24.370211   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined IP address 192.168.39.221 and MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:24.370426   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHPort
	I1107 23:17:24.370643   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHKeyPath
	I1107 23:17:24.370797   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHUsername
	I1107 23:17:24.370955   25442 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/ingress-addon-legacy-823610/id_rsa Username:docker}
	I1107 23:17:24.463028   25442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1107 23:17:24.463098   25442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1107 23:17:24.485932   25442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1107 23:17:24.485996   25442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1107 23:17:24.507390   25442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1107 23:17:24.507449   25442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1107 23:17:24.529133   25442 provision.go:86] duration metric: configureAuth took 388.610984ms
	I1107 23:17:24.529167   25442 buildroot.go:189] setting minikube options for container-runtime
	I1107 23:17:24.529357   25442 config.go:182] Loaded profile config "ingress-addon-legacy-823610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1107 23:17:24.529430   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHHostname
	I1107 23:17:24.531949   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:24.532314   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:2d:0c", ip: ""} in network mk-ingress-addon-legacy-823610: {Iface:virbr1 ExpiryTime:2023-11-08 00:17:16 +0000 UTC Type:0 Mac:52:54:00:4c:2d:0c Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ingress-addon-legacy-823610 Clientid:01:52:54:00:4c:2d:0c}
	I1107 23:17:24.532351   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined IP address 192.168.39.221 and MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:24.532519   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHPort
	I1107 23:17:24.532717   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHKeyPath
	I1107 23:17:24.532898   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHKeyPath
	I1107 23:17:24.533049   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHUsername
	I1107 23:17:24.533203   25442 main.go:141] libmachine: Using SSH client type: native
	I1107 23:17:24.533666   25442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1107 23:17:24.533692   25442 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1107 23:17:24.843561   25442 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1107 23:17:24.843585   25442 main.go:141] libmachine: Checking connection to Docker...
	I1107 23:17:24.843594   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetURL
	I1107 23:17:24.844865   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | Using libvirt version 6000000
	I1107 23:17:24.847387   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:24.847741   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:2d:0c", ip: ""} in network mk-ingress-addon-legacy-823610: {Iface:virbr1 ExpiryTime:2023-11-08 00:17:16 +0000 UTC Type:0 Mac:52:54:00:4c:2d:0c Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ingress-addon-legacy-823610 Clientid:01:52:54:00:4c:2d:0c}
	I1107 23:17:24.847772   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined IP address 192.168.39.221 and MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:24.847924   25442 main.go:141] libmachine: Docker is up and running!
	I1107 23:17:24.847940   25442 main.go:141] libmachine: Reticulating splines...
	I1107 23:17:24.847949   25442 client.go:171] LocalClient.Create took 24.476260486s
	I1107 23:17:24.847985   25442 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-823610" took 24.476330123s
	I1107 23:17:24.847997   25442 start.go:300] post-start starting for "ingress-addon-legacy-823610" (driver="kvm2")
	I1107 23:17:24.848010   25442 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 23:17:24.848032   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .DriverName
	I1107 23:17:24.848273   25442 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 23:17:24.848294   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHHostname
	I1107 23:17:24.850357   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:24.850735   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:2d:0c", ip: ""} in network mk-ingress-addon-legacy-823610: {Iface:virbr1 ExpiryTime:2023-11-08 00:17:16 +0000 UTC Type:0 Mac:52:54:00:4c:2d:0c Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ingress-addon-legacy-823610 Clientid:01:52:54:00:4c:2d:0c}
	I1107 23:17:24.850756   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined IP address 192.168.39.221 and MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:24.850888   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHPort
	I1107 23:17:24.851038   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHKeyPath
	I1107 23:17:24.851172   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHUsername
	I1107 23:17:24.851292   25442 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/ingress-addon-legacy-823610/id_rsa Username:docker}
	I1107 23:17:24.943134   25442 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 23:17:24.947550   25442 info.go:137] Remote host: Buildroot 2021.02.12
	I1107 23:17:24.947572   25442 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1107 23:17:24.947632   25442 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1107 23:17:24.947706   25442 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1107 23:17:24.947716   25442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> /etc/ssl/certs/168482.pem
	I1107 23:17:24.947803   25442 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 23:17:24.957312   25442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
	I1107 23:17:24.979845   25442 start.go:303] post-start completed in 131.834844ms
	I1107 23:17:24.979896   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetConfigRaw
	I1107 23:17:24.980474   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetIP
	I1107 23:17:24.982979   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:24.983307   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:2d:0c", ip: ""} in network mk-ingress-addon-legacy-823610: {Iface:virbr1 ExpiryTime:2023-11-08 00:17:16 +0000 UTC Type:0 Mac:52:54:00:4c:2d:0c Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ingress-addon-legacy-823610 Clientid:01:52:54:00:4c:2d:0c}
	I1107 23:17:24.983329   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined IP address 192.168.39.221 and MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:24.983559   25442 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/config.json ...
	I1107 23:17:24.983740   25442 start.go:128] duration metric: createHost completed in 24.630581468s
	I1107 23:17:24.983761   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHHostname
	I1107 23:17:24.985978   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:24.986268   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:2d:0c", ip: ""} in network mk-ingress-addon-legacy-823610: {Iface:virbr1 ExpiryTime:2023-11-08 00:17:16 +0000 UTC Type:0 Mac:52:54:00:4c:2d:0c Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ingress-addon-legacy-823610 Clientid:01:52:54:00:4c:2d:0c}
	I1107 23:17:24.986296   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined IP address 192.168.39.221 and MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:24.986411   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHPort
	I1107 23:17:24.986615   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHKeyPath
	I1107 23:17:24.986791   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHKeyPath
	I1107 23:17:24.986989   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHUsername
	I1107 23:17:24.987158   25442 main.go:141] libmachine: Using SSH client type: native
	I1107 23:17:24.987490   25442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1107 23:17:24.987499   25442 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1107 23:17:25.113761   25442 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699399045.083036505
	
	I1107 23:17:25.113784   25442 fix.go:206] guest clock: 1699399045.083036505
	I1107 23:17:25.113794   25442 fix.go:219] Guest: 2023-11-07 23:17:25.083036505 +0000 UTC Remote: 2023-11-07 23:17:24.983749606 +0000 UTC m=+39.872491173 (delta=99.286899ms)
	I1107 23:17:25.113842   25442 fix.go:190] guest clock delta is within tolerance: 99.286899ms
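The tolerance check above is plain arithmetic: guest 23:17:25.083036505 minus host 23:17:24.983749606 is 99.286899ms, under the allowed skew. The same computation in Go (the one-second tolerance here is an assumption, not read from minikube's fix.go):

// clockdelta.go - compare guest vs host timestamps the way the log reports them.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.000000000 -0700 MST"
	guest, _ := time.Parse(layout, "2023-11-07 23:17:25.083036505 +0000 UTC")
	host, _ := time.Parse(layout, "2023-11-07 23:17:24.983749606 +0000 UTC")
	delta := guest.Sub(host)
	fmt.Println(delta) // 99.286899ms
	// Hypothetical threshold; the real one lives in minikube's fix.go.
	fmt.Println("within tolerance:", delta.Abs() < time.Second)
}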
	I1107 23:17:25.113849   25442 start.go:83] releasing machines lock for "ingress-addon-legacy-823610", held for 24.760772204s
	I1107 23:17:25.113872   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .DriverName
	I1107 23:17:25.114108   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetIP
	I1107 23:17:25.116428   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:25.116726   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:2d:0c", ip: ""} in network mk-ingress-addon-legacy-823610: {Iface:virbr1 ExpiryTime:2023-11-08 00:17:16 +0000 UTC Type:0 Mac:52:54:00:4c:2d:0c Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ingress-addon-legacy-823610 Clientid:01:52:54:00:4c:2d:0c}
	I1107 23:17:25.116754   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined IP address 192.168.39.221 and MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:25.116927   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .DriverName
	I1107 23:17:25.117440   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .DriverName
	I1107 23:17:25.117625   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .DriverName
	I1107 23:17:25.117693   25442 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 23:17:25.117736   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHHostname
	I1107 23:17:25.117856   25442 ssh_runner.go:195] Run: cat /version.json
	I1107 23:17:25.117888   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHHostname
	I1107 23:17:25.120319   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:25.120393   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:25.120693   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:2d:0c", ip: ""} in network mk-ingress-addon-legacy-823610: {Iface:virbr1 ExpiryTime:2023-11-08 00:17:16 +0000 UTC Type:0 Mac:52:54:00:4c:2d:0c Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ingress-addon-legacy-823610 Clientid:01:52:54:00:4c:2d:0c}
	I1107 23:17:25.120725   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined IP address 192.168.39.221 and MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:25.120756   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:2d:0c", ip: ""} in network mk-ingress-addon-legacy-823610: {Iface:virbr1 ExpiryTime:2023-11-08 00:17:16 +0000 UTC Type:0 Mac:52:54:00:4c:2d:0c Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ingress-addon-legacy-823610 Clientid:01:52:54:00:4c:2d:0c}
	I1107 23:17:25.120777   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined IP address 192.168.39.221 and MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:25.120883   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHPort
	I1107 23:17:25.121074   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHPort
	I1107 23:17:25.121078   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHKeyPath
	I1107 23:17:25.121266   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHKeyPath
	I1107 23:17:25.121272   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHUsername
	I1107 23:17:25.121460   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHUsername
	I1107 23:17:25.121462   25442 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/ingress-addon-legacy-823610/id_rsa Username:docker}
	I1107 23:17:25.121589   25442 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/ingress-addon-legacy-823610/id_rsa Username:docker}
	I1107 23:17:25.230725   25442 ssh_runner.go:195] Run: systemctl --version
	I1107 23:17:25.236424   25442 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1107 23:17:25.392376   25442 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1107 23:17:25.398835   25442 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1107 23:17:25.398890   25442 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:17:25.412729   25442 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1107 23:17:25.412750   25442 start.go:472] detecting cgroup driver to use...
	I1107 23:17:25.412807   25442 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1107 23:17:25.427495   25442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1107 23:17:25.440590   25442 docker.go:203] disabling cri-docker service (if available) ...
	I1107 23:17:25.440645   25442 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1107 23:17:25.454420   25442 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1107 23:17:25.467775   25442 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1107 23:17:25.576351   25442 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1107 23:17:25.695332   25442 docker.go:219] disabling docker service ...
	I1107 23:17:25.695415   25442 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1107 23:17:25.709610   25442 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1107 23:17:25.721513   25442 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1107 23:17:25.834472   25442 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1107 23:17:25.949667   25442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1107 23:17:25.962562   25442 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 23:17:25.979117   25442 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1107 23:17:25.979169   25442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:17:25.988084   25442 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1107 23:17:25.988131   25442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:17:25.997191   25442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:17:26.006145   25442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:17:26.015701   25442 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1107 23:17:26.025101   25442 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1107 23:17:26.033421   25442 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1107 23:17:26.033473   25442 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1107 23:17:26.045320   25442 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
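The sequence above is a probe-then-load fallback: status 255 with "cannot stat" means the br_netfilter module is not loaded, so modprobe it and move on, then enable IPv4 forwarding. Sketched with os/exec (must run as root; paths as in the log):

// netfilter.go - ensure bridge traffic is visible to iptables.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Probe: failure here usually means the module isn't loaded yet.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("bridge-nf key missing, loading br_netfilter:", err)
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Fprintln(os.Stderr, "modprobe failed:", err)
			os.Exit(1)
		}
	}
	// Then enable forwarding, mirroring the `echo 1 > ip_forward` step.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}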
	I1107 23:17:26.053631   25442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 23:17:26.154491   25442 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1107 23:17:26.316897   25442 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1107 23:17:26.317020   25442 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1107 23:17:26.322647   25442 start.go:540] Will wait 60s for crictl version
	I1107 23:17:26.322705   25442 ssh_runner.go:195] Run: which crictl
	I1107 23:17:26.326598   25442 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1107 23:17:26.359931   25442 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1107 23:17:26.360005   25442 ssh_runner.go:195] Run: crio --version
	I1107 23:17:26.407751   25442 ssh_runner.go:195] Run: crio --version
	I1107 23:17:26.451379   25442 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I1107 23:17:26.452915   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetIP
	I1107 23:17:26.455321   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:26.455623   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:2d:0c", ip: ""} in network mk-ingress-addon-legacy-823610: {Iface:virbr1 ExpiryTime:2023-11-08 00:17:16 +0000 UTC Type:0 Mac:52:54:00:4c:2d:0c Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ingress-addon-legacy-823610 Clientid:01:52:54:00:4c:2d:0c}
	I1107 23:17:26.455648   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined IP address 192.168.39.221 and MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:17:26.455871   25442 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1107 23:17:26.459902   25442 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 23:17:26.472257   25442 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1107 23:17:26.472315   25442 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 23:17:26.507498   25442 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1107 23:17:26.507559   25442 ssh_runner.go:195] Run: which lz4
	I1107 23:17:26.511559   25442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1107 23:17:26.511647   25442 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1107 23:17:26.515817   25442 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1107 23:17:26.515842   25442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I1107 23:17:28.486574   25442 crio.go:444] Took 1.974942 seconds to copy over tarball
	I1107 23:17:28.486633   25442 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1107 23:17:31.544879   25442 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.058212819s)
	I1107 23:17:31.544927   25442 crio.go:451] Took 3.058322 seconds to extract the tarball
	I1107 23:17:31.544940   25442 ssh_runner.go:146] rm: /preloaded.tar.lz4
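For scale: the 495439307-byte preload was copied in 1.974942s and unpacked in 3.058212819s, i.e. roughly 251 MB/s over scp and 162 MB/s through `tar -I lz4`. Checked in Go:

// throughput.go - back out MB/s from the durations the log reports.
package main

import "fmt"

func main() {
	const bytes = 495439307.0
	fmt.Printf("copy:    %.1f MB/s\n", bytes/1.974942/1e6)    // ~250.9
	fmt.Printf("extract: %.1f MB/s\n", bytes/3.058212819/1e6) // ~162.0
}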
	I1107 23:17:31.589287   25442 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 23:17:31.643044   25442 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1107 23:17:31.643068   25442 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1107 23:17:31.643111   25442 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:17:31.643132   25442 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1107 23:17:31.643176   25442 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1107 23:17:31.643180   25442 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1107 23:17:31.643339   25442 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1107 23:17:31.643414   25442 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1107 23:17:31.643444   25442 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1107 23:17:31.643586   25442 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1107 23:17:31.644296   25442 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1107 23:17:31.644343   25442 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1107 23:17:31.644361   25442 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1107 23:17:31.644365   25442 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1107 23:17:31.644383   25442 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1107 23:17:31.644403   25442 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1107 23:17:31.644405   25442 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1107 23:17:31.644774   25442 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
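Each retrieval tries the local Docker daemon first; the "No such image" errors above just mean the fallback path (remote registry or the on-disk cache) will be used instead. A daemon-first lookup sketched with go-containerregistry, the library minikube's image.go builds on (error handling trimmed; treat the exact call sequence as an assumption, not minikube's code):

// imagelookup.go - daemon-first image lookup with a registry fallback.
package main

import (
	"fmt"

	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/daemon"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

func retrieve(tag string) (v1.Image, error) {
	ref, err := name.ParseReference(tag)
	if err != nil {
		return nil, err
	}
	if img, err := daemon.Image(ref); err == nil {
		return img, nil // found in the local Docker daemon
	}
	return remote.Image(ref) // fall back to the remote registry
}

func main() {
	img, err := retrieve("registry.k8s.io/pause:3.2")
	if err != nil {
		panic(err)
	}
	d, _ := img.Digest()
	fmt.Println("digest:", d)
}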
	I1107 23:17:31.817651   25442 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1107 23:17:31.837035   25442 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1107 23:17:31.850680   25442 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1107 23:17:31.857473   25442 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1107 23:17:31.882807   25442 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1107 23:17:31.882862   25442 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1107 23:17:31.882920   25442 ssh_runner.go:195] Run: which crictl
	I1107 23:17:31.928666   25442 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1107 23:17:31.928700   25442 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1107 23:17:31.928735   25442 ssh_runner.go:195] Run: which crictl
	I1107 23:17:31.939328   25442 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1107 23:17:31.942681   25442 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1107 23:17:31.942720   25442 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1107 23:17:31.942758   25442 ssh_runner.go:195] Run: which crictl
	I1107 23:17:31.942686   25442 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1107 23:17:31.945149   25442 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1107 23:17:31.945185   25442 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1107 23:17:31.945206   25442 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1107 23:17:31.945222   25442 ssh_runner.go:195] Run: which crictl
	I1107 23:17:31.945215   25442 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1107 23:17:32.037832   25442 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1107 23:17:32.037867   25442 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1107 23:17:32.037908   25442 ssh_runner.go:195] Run: which crictl
	I1107 23:17:32.052037   25442 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1107 23:17:32.052056   25442 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1107 23:17:32.052090   25442 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1107 23:17:32.052116   25442 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1107 23:17:32.052130   25442 ssh_runner.go:195] Run: which crictl
	I1107 23:17:32.052132   25442 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1107 23:17:32.052239   25442 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1107 23:17:32.052282   25442 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1107 23:17:32.056443   25442 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1107 23:17:32.106039   25442 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1107 23:17:32.153828   25442 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1107 23:17:32.153828   25442 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1107 23:17:32.153922   25442 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1107 23:17:32.154598   25442 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1107 23:17:32.183858   25442 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1107 23:17:32.183902   25442 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1107 23:17:32.183947   25442 ssh_runner.go:195] Run: which crictl
	I1107 23:17:32.187839   25442 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1107 23:17:32.225473   25442 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1107 23:17:32.575027   25442 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:17:32.719261   25442 cache_images.go:92] LoadImages completed in 1.076178219s
	W1107 23:17:32.719338   25442 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I1107 23:17:32.719399   25442 ssh_runner.go:195] Run: crio config
	I1107 23:17:32.777770   25442 cni.go:84] Creating CNI manager for ""
	I1107 23:17:32.777789   25442 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1107 23:17:32.777808   25442 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 23:17:32.777830   25442 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.221 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-823610 NodeName:ingress-addon-legacy-823610 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1107 23:17:32.777968   25442 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.221
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-823610"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 23:17:32.778047   25442 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-823610 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-823610 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
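The kubelet drop-in above is rendered from the profile config and scp'd as 10-kubeadm.conf (436 bytes, per the lines that follow). A minimal text/template sketch of that rendering step (struct and field names here are illustrative, not minikube's actual types):

// kubeletunit.go - render a kubelet systemd drop-in from cluster config.
package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.18.20", "ingress-addon-legacy-823610", "192.168.39.221"})
}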
	I1107 23:17:32.778099   25442 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1107 23:17:32.787782   25442 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 23:17:32.787841   25442 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 23:17:32.796936   25442 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (436 bytes)
	I1107 23:17:32.813174   25442 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1107 23:17:32.828315   25442 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2129 bytes)
	I1107 23:17:32.844501   25442 ssh_runner.go:195] Run: grep 192.168.39.221	control-plane.minikube.internal$ /etc/hosts
	I1107 23:17:32.848258   25442 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 23:17:32.859422   25442 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610 for IP: 192.168.39.221
	I1107 23:17:32.859465   25442 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:17:32.859609   25442 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1107 23:17:32.859669   25442 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1107 23:17:32.859730   25442 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.key
	I1107 23:17:32.859745   25442 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt with IP's: []
	I1107 23:17:32.939677   25442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt ...
	I1107 23:17:32.939707   25442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: {Name:mk6ea620f5d11f9939722edb8f3f5b10a0f4ed7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:17:32.939871   25442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.key ...
	I1107 23:17:32.939884   25442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.key: {Name:mk3d6b241f044f895d920c22c19186127587bb09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:17:32.939963   25442 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/apiserver.key.52bad639
	I1107 23:17:32.939979   25442 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/apiserver.crt.52bad639 with IP's: [192.168.39.221 10.96.0.1 127.0.0.1 10.0.0.1]
	I1107 23:17:33.163649   25442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/apiserver.crt.52bad639 ...
	I1107 23:17:33.163676   25442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/apiserver.crt.52bad639: {Name:mkbbf64ab2c836bdfe2955cd407275c8dda6d62d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:17:33.163833   25442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/apiserver.key.52bad639 ...
	I1107 23:17:33.163846   25442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/apiserver.key.52bad639: {Name:mk5a0b6a3b15a4cee7137e51a8bf210afe9e9431 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:17:33.163913   25442 certs.go:337] copying /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/apiserver.crt.52bad639 -> /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/apiserver.crt
	I1107 23:17:33.163983   25442 certs.go:341] copying /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/apiserver.key.52bad639 -> /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/apiserver.key
	I1107 23:17:33.164031   25442 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/proxy-client.key
	I1107 23:17:33.164044   25442 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/proxy-client.crt with IP's: []
	I1107 23:17:33.247617   25442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/proxy-client.crt ...
	I1107 23:17:33.247649   25442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/proxy-client.crt: {Name:mk7fb7cdc956a64010f97aa1ca6013f2d7481cd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:17:33.247800   25442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/proxy-client.key ...
	I1107 23:17:33.247814   25442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/proxy-client.key: {Name:mk27d5d6c194d30730537f9caaf598b8cf44ef13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:17:33.247890   25442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1107 23:17:33.247907   25442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1107 23:17:33.247917   25442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1107 23:17:33.247929   25442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1107 23:17:33.247941   25442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1107 23:17:33.247952   25442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1107 23:17:33.247962   25442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1107 23:17:33.247972   25442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1107 23:17:33.248022   25442 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem (1338 bytes)
	W1107 23:17:33.248057   25442 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848_empty.pem, impossibly tiny 0 bytes
	I1107 23:17:33.248068   25442 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 23:17:33.248088   25442 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1107 23:17:33.248115   25442 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1107 23:17:33.248138   25442 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1107 23:17:33.248216   25442 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem (1708 bytes)
	I1107 23:17:33.248249   25442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> /usr/share/ca-certificates/168482.pem
	I1107 23:17:33.248264   25442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:17:33.248276   25442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem -> /usr/share/ca-certificates/16848.pem
	I1107 23:17:33.248871   25442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 23:17:33.272491   25442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1107 23:17:33.294657   25442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 23:17:33.315959   25442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1107 23:17:33.337909   25442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 23:17:33.359764   25442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1107 23:17:33.381869   25442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 23:17:33.404260   25442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1107 23:17:33.426061   25442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /usr/share/ca-certificates/168482.pem (1708 bytes)
	I1107 23:17:33.447577   25442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 23:17:33.468739   25442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem --> /usr/share/ca-certificates/16848.pem (1338 bytes)
	I1107 23:17:33.490273   25442 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 23:17:33.506977   25442 ssh_runner.go:195] Run: openssl version
	I1107 23:17:33.512745   25442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 23:17:33.523923   25442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:17:33.528671   25442 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:17:33.528736   25442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:17:33.534305   25442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 23:17:33.545348   25442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16848.pem && ln -fs /usr/share/ca-certificates/16848.pem /etc/ssl/certs/16848.pem"
	I1107 23:17:33.556396   25442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16848.pem
	I1107 23:17:33.561066   25442 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1107 23:17:33.561120   25442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16848.pem
	I1107 23:17:33.566548   25442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16848.pem /etc/ssl/certs/51391683.0"
	I1107 23:17:33.577040   25442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168482.pem && ln -fs /usr/share/ca-certificates/168482.pem /etc/ssl/certs/168482.pem"
	I1107 23:17:33.587315   25442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168482.pem
	I1107 23:17:33.591521   25442 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1107 23:17:33.591569   25442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168482.pem
	I1107 23:17:33.596904   25442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168482.pem /etc/ssl/certs/3ec20f2e.0"
	I1107 23:17:33.606923   25442 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1107 23:17:33.610863   25442 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1107 23:17:33.610911   25442 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-823610 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-823610 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.221 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:17:33.610982   25442 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1107 23:17:33.611013   25442 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1107 23:17:33.657060   25442 cri.go:89] found id: ""
	I1107 23:17:33.657146   25442 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 23:17:33.667043   25442 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 23:17:33.676269   25442 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 23:17:33.685402   25442 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 23:17:33.685447   25442 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1107 23:17:33.741727   25442 kubeadm.go:322] W1107 23:17:33.722934     962 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1107 23:17:33.881310   25442 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 23:17:36.954954   25442 kubeadm.go:322] W1107 23:17:36.939426     962 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1107 23:17:36.956444   25442 kubeadm.go:322] W1107 23:17:36.940880     962 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1107 23:17:46.949769   25442 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1107 23:17:46.949841   25442 kubeadm.go:322] [preflight] Running pre-flight checks
	I1107 23:17:46.949963   25442 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 23:17:46.950123   25442 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 23:17:46.950221   25442 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 23:17:46.950358   25442 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 23:17:46.950437   25442 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 23:17:46.950474   25442 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1107 23:17:46.950549   25442 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 23:17:46.951862   25442 out.go:204]   - Generating certificates and keys ...
	I1107 23:17:46.951948   25442 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1107 23:17:46.952025   25442 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1107 23:17:46.952119   25442 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1107 23:17:46.952206   25442 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1107 23:17:46.952291   25442 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1107 23:17:46.952362   25442 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1107 23:17:46.952444   25442 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1107 23:17:46.952585   25442 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-823610 localhost] and IPs [192.168.39.221 127.0.0.1 ::1]
	I1107 23:17:46.952676   25442 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1107 23:17:46.952860   25442 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-823610 localhost] and IPs [192.168.39.221 127.0.0.1 ::1]
	I1107 23:17:46.952952   25442 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1107 23:17:46.953042   25442 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1107 23:17:46.953104   25442 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1107 23:17:46.953194   25442 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 23:17:46.953277   25442 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 23:17:46.953360   25442 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 23:17:46.953455   25442 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 23:17:46.953534   25442 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 23:17:46.953634   25442 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 23:17:46.955131   25442 out.go:204]   - Booting up control plane ...
	I1107 23:17:46.955236   25442 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 23:17:46.955327   25442 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 23:17:46.955434   25442 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 23:17:46.955531   25442 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 23:17:46.955720   25442 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 23:17:46.955818   25442 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503326 seconds
	I1107 23:17:46.955951   25442 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1107 23:17:46.956131   25442 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1107 23:17:46.956183   25442 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1107 23:17:46.956312   25442 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-823610 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1107 23:17:46.956399   25442 kubeadm.go:322] [bootstrap-token] Using token: xbec67.datwgnaabfcp6cw5
	I1107 23:17:46.958121   25442 out.go:204]   - Configuring RBAC rules ...
	I1107 23:17:46.958249   25442 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1107 23:17:46.958344   25442 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1107 23:17:46.958471   25442 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1107 23:17:46.958691   25442 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1107 23:17:46.958818   25442 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1107 23:17:46.958920   25442 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1107 23:17:46.959055   25442 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1107 23:17:46.959118   25442 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1107 23:17:46.959189   25442 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1107 23:17:46.959198   25442 kubeadm.go:322] 
	I1107 23:17:46.959277   25442 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1107 23:17:46.959286   25442 kubeadm.go:322] 
	I1107 23:17:46.959374   25442 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1107 23:17:46.959383   25442 kubeadm.go:322] 
	I1107 23:17:46.959411   25442 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1107 23:17:46.959461   25442 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1107 23:17:46.959502   25442 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1107 23:17:46.959514   25442 kubeadm.go:322] 
	I1107 23:17:46.959556   25442 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1107 23:17:46.959637   25442 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1107 23:17:46.959698   25442 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1107 23:17:46.959704   25442 kubeadm.go:322] 
	I1107 23:17:46.959769   25442 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1107 23:17:46.959850   25442 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1107 23:17:46.959872   25442 kubeadm.go:322] 
	I1107 23:17:46.959984   25442 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token xbec67.datwgnaabfcp6cw5 \
	I1107 23:17:46.960111   25442 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 \
	I1107 23:17:46.960151   25442 kubeadm.go:322]     --control-plane 
	I1107 23:17:46.960166   25442 kubeadm.go:322] 
	I1107 23:17:46.960270   25442 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1107 23:17:46.960278   25442 kubeadm.go:322] 
	I1107 23:17:46.960376   25442 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token xbec67.datwgnaabfcp6cw5 \
	I1107 23:17:46.960482   25442 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 
	I1107 23:17:46.960504   25442 cni.go:84] Creating CNI manager for ""
	I1107 23:17:46.960522   25442 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1107 23:17:46.962872   25442 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1107 23:17:46.964237   25442 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1107 23:17:46.973284   25442 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1107 23:17:46.990896   25442 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1107 23:17:46.991001   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:17:46.991016   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=ingress-addon-legacy-823610 minikube.k8s.io/updated_at=2023_11_07T23_17_46_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:17:47.021924   25442 ops.go:34] apiserver oom_adj: -16
	I1107 23:17:47.179234   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:17:47.390204   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:17:48.026088   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:17:48.526751   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:17:49.026608   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:17:49.525812   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:17:50.026826   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:17:50.526250   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:17:51.026531   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:17:51.526223   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:17:52.026757   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:17:52.526046   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:17:53.026536   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:17:53.526569   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:17:54.026525   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:17:54.525761   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:17:55.026484   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:17:55.526269   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:17:56.025762   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:17:56.526088   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:17:57.025945   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:17:57.526369   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:17:58.025813   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:17:58.526512   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:17:59.026647   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:17:59.526714   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:18:00.026140   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:18:00.526198   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:18:01.025917   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:18:01.526210   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:18:02.026802   25442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:18:02.237843   25442 kubeadm.go:1081] duration metric: took 15.246911097s to wait for elevateKubeSystemPrivileges.
	I1107 23:18:02.237891   25442 kubeadm.go:406] StartCluster complete in 28.626976479s
	I1107 23:18:02.237912   25442 settings.go:142] acquiring lock: {Name:mk24113e0811d0822c92609e9886aa6fa175d90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:18:02.238017   25442 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1107 23:18:02.238725   25442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:18:02.310019   25442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1107 23:18:02.310415   25442 config.go:182] Loaded profile config "ingress-addon-legacy-823610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1107 23:18:02.310547   25442 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1107 23:18:02.310643   25442 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-823610"
	I1107 23:18:02.310664   25442 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-823610"
	I1107 23:18:02.310682   25442 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-823610"
	I1107 23:18:02.310689   25442 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-823610"
	I1107 23:18:02.310748   25442 host.go:66] Checking if "ingress-addon-legacy-823610" exists ...
	I1107 23:18:02.310842   25442 kapi.go:59] client config for ingress-addon-legacy-823610: &rest.Config{Host:"https://192.168.39.221:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.key", CAFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:18:02.311187   25442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:18:02.311194   25442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:18:02.311217   25442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:18:02.311219   25442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:18:02.311627   25442 cert_rotation.go:137] Starting client certificate rotation controller
	I1107 23:18:02.326719   25442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36197
	I1107 23:18:02.327125   25442 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:18:02.327620   25442 main.go:141] libmachine: Using API Version  1
	I1107 23:18:02.327642   25442 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:18:02.328046   25442 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:18:02.328640   25442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:18:02.328692   25442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:18:02.329469   25442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35929
	I1107 23:18:02.329824   25442 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:18:02.330232   25442 main.go:141] libmachine: Using API Version  1
	I1107 23:18:02.330256   25442 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:18:02.330590   25442 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:18:02.330778   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetState
	I1107 23:18:02.332885   25442 kapi.go:59] client config for ingress-addon-legacy-823610: &rest.Config{Host:"https://192.168.39.221:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.key", CAFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:18:02.333130   25442 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-823610"
	I1107 23:18:02.333160   25442 host.go:66] Checking if "ingress-addon-legacy-823610" exists ...
	I1107 23:18:02.333425   25442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:18:02.333450   25442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:18:02.347027   25442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41081
	I1107 23:18:02.347546   25442 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:18:02.347625   25442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35767
	I1107 23:18:02.348086   25442 main.go:141] libmachine: Using API Version  1
	I1107 23:18:02.348108   25442 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:18:02.348131   25442 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:18:02.348449   25442 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:18:02.348596   25442 main.go:141] libmachine: Using API Version  1
	I1107 23:18:02.348626   25442 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:18:02.348968   25442 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:18:02.349011   25442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:18:02.349055   25442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:18:02.349139   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetState
	I1107 23:18:02.350856   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .DriverName
	I1107 23:18:02.366714   25442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33177
	I1107 23:18:02.409179   25442 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:18:02.410911   25442 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:18:02.409827   25442 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:18:02.410965   25442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1107 23:18:02.410997   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHHostname
	I1107 23:18:02.411474   25442 main.go:141] libmachine: Using API Version  1
	I1107 23:18:02.411497   25442 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:18:02.411872   25442 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:18:02.412079   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetState
	I1107 23:18:02.413794   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .DriverName
	I1107 23:18:02.414039   25442 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1107 23:18:02.414055   25442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1107 23:18:02.414072   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHHostname
	I1107 23:18:02.414800   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:18:02.415237   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:2d:0c", ip: ""} in network mk-ingress-addon-legacy-823610: {Iface:virbr1 ExpiryTime:2023-11-08 00:17:16 +0000 UTC Type:0 Mac:52:54:00:4c:2d:0c Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ingress-addon-legacy-823610 Clientid:01:52:54:00:4c:2d:0c}
	I1107 23:18:02.415271   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined IP address 192.168.39.221 and MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:18:02.415351   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHPort
	I1107 23:18:02.415538   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHKeyPath
	I1107 23:18:02.415754   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHUsername
	I1107 23:18:02.415985   25442 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/ingress-addon-legacy-823610/id_rsa Username:docker}
	I1107 23:18:02.417138   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:18:02.417503   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:2d:0c", ip: ""} in network mk-ingress-addon-legacy-823610: {Iface:virbr1 ExpiryTime:2023-11-08 00:17:16 +0000 UTC Type:0 Mac:52:54:00:4c:2d:0c Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:ingress-addon-legacy-823610 Clientid:01:52:54:00:4c:2d:0c}
	I1107 23:18:02.417529   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | domain ingress-addon-legacy-823610 has defined IP address 192.168.39.221 and MAC address 52:54:00:4c:2d:0c in network mk-ingress-addon-legacy-823610
	I1107 23:18:02.417671   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHPort
	I1107 23:18:02.417826   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHKeyPath
	I1107 23:18:02.417943   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .GetSSHUsername
	I1107 23:18:02.418049   25442 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/ingress-addon-legacy-823610/id_rsa Username:docker}
	I1107 23:18:02.502808   25442 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-823610" context rescaled to 1 replicas
	I1107 23:18:02.502855   25442 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.221 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1107 23:18:02.504929   25442 out.go:177] * Verifying Kubernetes components...
	I1107 23:18:02.506467   25442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:18:02.545212   25442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1107 23:18:02.570948   25442 kapi.go:59] client config for ingress-addon-legacy-823610: &rest.Config{Host:"https://192.168.39.221:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.key", CAFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:18:02.571204   25442 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-823610" to be "Ready" ...
	I1107 23:18:02.613925   25442 node_ready.go:49] node "ingress-addon-legacy-823610" has status "Ready":"True"
	I1107 23:18:02.613950   25442 node_ready.go:38] duration metric: took 42.725483ms waiting for node "ingress-addon-legacy-823610" to be "Ready" ...
	I1107 23:18:02.613960   25442 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:18:02.764690   25442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1107 23:18:02.769859   25442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:18:02.937049   25442 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-9grkq" in "kube-system" namespace to be "Ready" ...
	I1107 23:18:03.508494   25442 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1107 23:18:03.608039   25442 main.go:141] libmachine: Making call to close driver server
	I1107 23:18:03.608063   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .Close
	I1107 23:18:03.608334   25442 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:18:03.608353   25442 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:18:03.608362   25442 main.go:141] libmachine: Making call to close driver server
	I1107 23:18:03.608374   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | Closing plugin on server side
	I1107 23:18:03.608378   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .Close
	I1107 23:18:03.608677   25442 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:18:03.608691   25442 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:18:03.608691   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | Closing plugin on server side
	I1107 23:18:03.630856   25442 main.go:141] libmachine: Making call to close driver server
	I1107 23:18:03.630882   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .Close
	I1107 23:18:03.631138   25442 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:18:03.631159   25442 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:18:03.631159   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | Closing plugin on server side
	I1107 23:18:03.676782   25442 main.go:141] libmachine: Making call to close driver server
	I1107 23:18:03.676803   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .Close
	I1107 23:18:03.677104   25442 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:18:03.677124   25442 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:18:03.677173   25442 main.go:141] libmachine: Making call to close driver server
	I1107 23:18:03.677191   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) Calling .Close
	I1107 23:18:03.677439   25442 main.go:141] libmachine: (ingress-addon-legacy-823610) DBG | Closing plugin on server side
	I1107 23:18:03.677472   25442 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:18:03.677485   25442 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:18:03.679344   25442 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1107 23:18:03.681334   25442 addons.go:502] enable addons completed in 1.370785614s: enabled=[default-storageclass storage-provisioner]
	I1107 23:18:05.384133   25442 pod_ready.go:102] pod "coredns-66bff467f8-9grkq" in "kube-system" namespace has status "Ready":"False"
	I1107 23:18:07.882209   25442 pod_ready.go:102] pod "coredns-66bff467f8-9grkq" in "kube-system" namespace has status "Ready":"False"
	I1107 23:18:09.883170   25442 pod_ready.go:102] pod "coredns-66bff467f8-9grkq" in "kube-system" namespace has status "Ready":"False"
	I1107 23:18:12.382288   25442 pod_ready.go:102] pod "coredns-66bff467f8-9grkq" in "kube-system" namespace has status "Ready":"False"
	I1107 23:18:14.383544   25442 pod_ready.go:102] pod "coredns-66bff467f8-9grkq" in "kube-system" namespace has status "Ready":"False"
	I1107 23:18:16.883400   25442 pod_ready.go:102] pod "coredns-66bff467f8-9grkq" in "kube-system" namespace has status "Ready":"False"
	I1107 23:18:19.383380   25442 pod_ready.go:102] pod "coredns-66bff467f8-9grkq" in "kube-system" namespace has status "Ready":"False"
	I1107 23:18:21.883078   25442 pod_ready.go:102] pod "coredns-66bff467f8-9grkq" in "kube-system" namespace has status "Ready":"False"
	I1107 23:18:24.382527   25442 pod_ready.go:102] pod "coredns-66bff467f8-9grkq" in "kube-system" namespace has status "Ready":"False"
	I1107 23:18:26.383553   25442 pod_ready.go:102] pod "coredns-66bff467f8-9grkq" in "kube-system" namespace has status "Ready":"False"
	I1107 23:18:28.882736   25442 pod_ready.go:102] pod "coredns-66bff467f8-9grkq" in "kube-system" namespace has status "Ready":"False"
	I1107 23:18:31.383315   25442 pod_ready.go:102] pod "coredns-66bff467f8-9grkq" in "kube-system" namespace has status "Ready":"False"
	I1107 23:18:33.884180   25442 pod_ready.go:102] pod "coredns-66bff467f8-9grkq" in "kube-system" namespace has status "Ready":"False"
	I1107 23:18:35.383951   25442 pod_ready.go:92] pod "coredns-66bff467f8-9grkq" in "kube-system" namespace has status "Ready":"True"
	I1107 23:18:35.383972   25442 pod_ready.go:81] duration metric: took 32.44689055s waiting for pod "coredns-66bff467f8-9grkq" in "kube-system" namespace to be "Ready" ...
	I1107 23:18:35.383980   25442 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-tp5sr" in "kube-system" namespace to be "Ready" ...
	I1107 23:18:35.387681   25442 pod_ready.go:97] error getting pod "coredns-66bff467f8-tp5sr" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-tp5sr" not found
	I1107 23:18:35.387705   25442 pod_ready.go:81] duration metric: took 3.717804ms waiting for pod "coredns-66bff467f8-tp5sr" in "kube-system" namespace to be "Ready" ...
	E1107 23:18:35.387716   25442 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-66bff467f8-tp5sr" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-tp5sr" not found
	I1107 23:18:35.387723   25442 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-823610" in "kube-system" namespace to be "Ready" ...
	I1107 23:18:35.393367   25442 pod_ready.go:92] pod "etcd-ingress-addon-legacy-823610" in "kube-system" namespace has status "Ready":"True"
	I1107 23:18:35.393388   25442 pod_ready.go:81] duration metric: took 5.656634ms waiting for pod "etcd-ingress-addon-legacy-823610" in "kube-system" namespace to be "Ready" ...
	I1107 23:18:35.393399   25442 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-823610" in "kube-system" namespace to be "Ready" ...
	I1107 23:18:35.400738   25442 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-823610" in "kube-system" namespace has status "Ready":"True"
	I1107 23:18:35.400757   25442 pod_ready.go:81] duration metric: took 7.350445ms waiting for pod "kube-apiserver-ingress-addon-legacy-823610" in "kube-system" namespace to be "Ready" ...
	I1107 23:18:35.400765   25442 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-823610" in "kube-system" namespace to be "Ready" ...
	I1107 23:18:35.411310   25442 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-823610" in "kube-system" namespace has status "Ready":"True"
	I1107 23:18:35.411332   25442 pod_ready.go:81] duration metric: took 10.560668ms waiting for pod "kube-controller-manager-ingress-addon-legacy-823610" in "kube-system" namespace to be "Ready" ...
	I1107 23:18:35.411341   25442 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zxn8s" in "kube-system" namespace to be "Ready" ...
	I1107 23:18:35.576670   25442 request.go:629] Waited for 161.236615ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.221:8443/api/v1/nodes/ingress-addon-legacy-823610
	I1107 23:18:35.579917   25442 pod_ready.go:92] pod "kube-proxy-zxn8s" in "kube-system" namespace has status "Ready":"True"
	I1107 23:18:35.579935   25442 pod_ready.go:81] duration metric: took 168.588261ms waiting for pod "kube-proxy-zxn8s" in "kube-system" namespace to be "Ready" ...
	I1107 23:18:35.579943   25442 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-823610" in "kube-system" namespace to be "Ready" ...
	I1107 23:18:35.777373   25442 request.go:629] Waited for 197.378963ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.221:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-823610
	I1107 23:18:35.977482   25442 request.go:629] Waited for 196.358253ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.221:8443/api/v1/nodes/ingress-addon-legacy-823610
	I1107 23:18:35.980725   25442 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-823610" in "kube-system" namespace has status "Ready":"True"
	I1107 23:18:35.980747   25442 pod_ready.go:81] duration metric: took 400.797328ms waiting for pod "kube-scheduler-ingress-addon-legacy-823610" in "kube-system" namespace to be "Ready" ...
	I1107 23:18:35.980758   25442 pod_ready.go:38] duration metric: took 33.366786775s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:18:35.980775   25442 api_server.go:52] waiting for apiserver process to appear ...
	I1107 23:18:35.980845   25442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:18:35.995200   25442 api_server.go:72] duration metric: took 33.492314376s to wait for apiserver process to appear ...
	I1107 23:18:35.995225   25442 api_server.go:88] waiting for apiserver healthz status ...
	I1107 23:18:35.995243   25442 api_server.go:253] Checking apiserver healthz at https://192.168.39.221:8443/healthz ...
	I1107 23:18:36.000716   25442 api_server.go:279] https://192.168.39.221:8443/healthz returned 200:
	ok
	I1107 23:18:36.001790   25442 api_server.go:141] control plane version: v1.18.20
	I1107 23:18:36.001811   25442 api_server.go:131] duration metric: took 6.580421ms to wait for apiserver health ...
	I1107 23:18:36.001818   25442 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 23:18:36.177228   25442 request.go:629] Waited for 175.340721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.221:8443/api/v1/namespaces/kube-system/pods
	I1107 23:18:36.182951   25442 system_pods.go:59] 7 kube-system pods found
	I1107 23:18:36.182975   25442 system_pods.go:61] "coredns-66bff467f8-9grkq" [8e1b17c9-2f8e-4b88-a477-b02f96bf579b] Running
	I1107 23:18:36.182980   25442 system_pods.go:61] "etcd-ingress-addon-legacy-823610" [f66417c6-fb22-4ff7-828b-abf76431862f] Running
	I1107 23:18:36.182987   25442 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-823610" [28e7a093-f8ca-4f07-b957-cb31343de519] Running
	I1107 23:18:36.182992   25442 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-823610" [eb4e2382-9f1a-4a18-880a-d438f268936a] Running
	I1107 23:18:36.182996   25442 system_pods.go:61] "kube-proxy-zxn8s" [9b50630e-210e-43a7-b412-7945186015fd] Running
	I1107 23:18:36.183003   25442 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-823610" [dc432270-b51e-48c5-8208-eec11ed56032] Running
	I1107 23:18:36.183010   25442 system_pods.go:61] "storage-provisioner" [6ddb18aa-4fe1-4d13-a4f3-48184e8be2b0] Running
	I1107 23:18:36.183022   25442 system_pods.go:74] duration metric: took 181.197078ms to wait for pod list to return data ...
	I1107 23:18:36.183036   25442 default_sa.go:34] waiting for default service account to be created ...
	I1107 23:18:36.377455   25442 request.go:629] Waited for 194.361039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.221:8443/api/v1/namespaces/default/serviceaccounts
	I1107 23:18:36.380714   25442 default_sa.go:45] found service account: "default"
	I1107 23:18:36.380737   25442 default_sa.go:55] duration metric: took 197.691891ms for default service account to be created ...
	I1107 23:18:36.380745   25442 system_pods.go:116] waiting for k8s-apps to be running ...
	I1107 23:18:36.577202   25442 request.go:629] Waited for 196.392117ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.221:8443/api/v1/namespaces/kube-system/pods
	I1107 23:18:36.583206   25442 system_pods.go:86] 7 kube-system pods found
	I1107 23:18:36.583228   25442 system_pods.go:89] "coredns-66bff467f8-9grkq" [8e1b17c9-2f8e-4b88-a477-b02f96bf579b] Running
	I1107 23:18:36.583233   25442 system_pods.go:89] "etcd-ingress-addon-legacy-823610" [f66417c6-fb22-4ff7-828b-abf76431862f] Running
	I1107 23:18:36.583237   25442 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-823610" [28e7a093-f8ca-4f07-b957-cb31343de519] Running
	I1107 23:18:36.583241   25442 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-823610" [eb4e2382-9f1a-4a18-880a-d438f268936a] Running
	I1107 23:18:36.583245   25442 system_pods.go:89] "kube-proxy-zxn8s" [9b50630e-210e-43a7-b412-7945186015fd] Running
	I1107 23:18:36.583248   25442 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-823610" [dc432270-b51e-48c5-8208-eec11ed56032] Running
	I1107 23:18:36.583252   25442 system_pods.go:89] "storage-provisioner" [6ddb18aa-4fe1-4d13-a4f3-48184e8be2b0] Running
	I1107 23:18:36.583258   25442 system_pods.go:126] duration metric: took 202.509077ms to wait for k8s-apps to be running ...
	I1107 23:18:36.583264   25442 system_svc.go:44] waiting for kubelet service to be running ....
	I1107 23:18:36.583303   25442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:18:36.597086   25442 system_svc.go:56] duration metric: took 13.81367ms WaitForService to wait for kubelet.
	I1107 23:18:36.597110   25442 kubeadm.go:581] duration metric: took 34.094231946s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
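
The kubelet check above leans on systemctl's exit code: `systemctl is-active --quiet <unit>` prints nothing and exits 0 only when the unit is active. A minimal local sketch of that check follows (the harness runs the equivalent command over SSH inside the VM, via sudo):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// With --quiet, systemctl signals the unit state purely through its
	// exit code: 0 means active, non-zero means anything else.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}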
	I1107 23:18:36.597127   25442 node_conditions.go:102] verifying NodePressure condition ...
	I1107 23:18:36.777541   25442 request.go:629] Waited for 180.341912ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.221:8443/api/v1/nodes
	I1107 23:18:36.781196   25442 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1107 23:18:36.781220   25442 node_conditions.go:123] node cpu capacity is 2
	I1107 23:18:36.781230   25442 node_conditions.go:105] duration metric: took 184.098755ms to run NodePressure ...
	I1107 23:18:36.781240   25442 start.go:228] waiting for startup goroutines ...
	I1107 23:18:36.781253   25442 start.go:233] waiting for cluster config update ...
	I1107 23:18:36.781267   25442 start.go:242] writing updated cluster config ...
	I1107 23:18:36.781540   25442 ssh_runner.go:195] Run: rm -f paused
	I1107 23:18:36.826874   25442 start.go:600] kubectl: 1.28.3, cluster: 1.18.20 (minor skew: 10)
	I1107 23:18:36.829011   25442 out.go:177] 
	W1107 23:18:36.830477   25442 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.18.20.
	I1107 23:18:36.831917   25442 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1107 23:18:36.833289   25442 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-823610" cluster and "default" namespace by default
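
The skew warning above exists because kubectl only supports apiservers within one minor version of itself, and a v1.28 client against this v1.18.20 control plane is ten minors apart. A minimal sketch of reading the server version that kubectl compares its own build against, assuming a standard kubeconfig:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The discovery /version endpoint reports the control-plane build;
	// kubectl's skew policy allows one minor version in either direction.
	sv, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane:", sv.GitVersion)
}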
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-07 23:17:12 UTC, ends at Tue 2023-11-07 23:21:45 UTC. --
	Nov 07 23:21:45 ingress-addon-legacy-823610 crio[718]: time="2023-11-07 23:21:45.621254411Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=0ef49629-6069-4c8e-8bc3-1d62ddd11bee name=/runtime.v1.RuntimeService/Version
	Nov 07 23:21:45 ingress-addon-legacy-823610 crio[718]: time="2023-11-07 23:21:45.622334991Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8abd2b31-dc02-495d-9b75-6b20e2e8b661 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 07 23:21:45 ingress-addon-legacy-823610 crio[718]: time="2023-11-07 23:21:45.622774097Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699399305622762247,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202349,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=8abd2b31-dc02-495d-9b75-6b20e2e8b661 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 07 23:21:45 ingress-addon-legacy-823610 crio[718]: time="2023-11-07 23:21:45.623411166Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7b52e6a1-992b-4fda-bf9c-c6ef77b3757e name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:21:45 ingress-addon-legacy-823610 crio[718]: time="2023-11-07 23:21:45.623456375Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7b52e6a1-992b-4fda-bf9c-c6ef77b3757e name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:21:45 ingress-addon-legacy-823610 crio[718]: time="2023-11-07 23:21:45.623701310Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5366090d1dee4ad53fdaf1d9cb115b4ec2ca8675a6bfa5f99318897ca0fd885,PodSandboxId:a935068f996ae5d2ebcd66fc05a224aa013f6842a9700b7a65df2453cb7a723a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1699399296121407698,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-9tc2x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 064592de-ff07-4e29-bd27-962259a8d36d,},Annotations:map[string]string{io.kubernetes.container.hash: 3bfa8ca9,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdddb20dfcbe0b8d5f995b0796d57eecb95367aa565cb14ebc7c0f05f8c30c79,PodSandboxId:90c15798ead33ee9ece6d032c6341a83389fe007b55cf8b317aed12eb2959dff,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1699399154882648807,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0ab4206-2f23-414e-916b-c9f8899844cb,},Annotations:map[string]string{io.kubernetes.container.hash: c1f390a5,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfac0311b15cc4c8c3b78bb8be463bef264740a5c857e3b714fcec3073409124,PodSandboxId:cb07ad35771d5dc5805f44788e270794c399187ba21b8d0b052cfb744e033153,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1699399132621927428,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-sfpgt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0330f2c0-c5ac-4680-b02b-bdfb376cc2c8,},Annotations:map[string]string{io.kubernetes.container.hash: 86a10c80,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8193e63895516ccf192f59930fe8abf50e55195072dd2c6ae20e03d7336380c5,PodSandboxId:cff73313fd2e2809a3c851a2e0a68567e30487ecd53ecbf4cfa00a786ccc4cbc,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1699399125043960493,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4bl4d,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e8c60fc0-347b-45a4-a657-fa58ed2ae5ca,},Annotations:map[string]string{io.kubernetes.container.hash: 8b4a675a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6fa255c1a01de4fe4eb55284f3a45535fe4f28659ece197beff0470092714bb,PodSandboxId:8d81f40347becba445c0a315ed4942889f4c66e1d8a82074820948ad0c756ed0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1699399123860625990,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-d5rl4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3748c907-32fd-4e85-b566-1bd4022ec596,},Annotations:map[string]string{io.kubernetes.container.hash: 91ac2cba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f4a57fd0f1bfe7e758709fb659476f689a7763e8b714b01b42daf9b33195e62,PodSandboxId:73d9d3dca78d99a9ac58f215bcc55765111f96cfdd68cd379d560afe2638fda8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699399085818056670,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ddb18aa-4fe1-4d13-a4f3-48184e8be2b0,},Annotations:map[string]string{io.kubernetes.container.hash: 51525231,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed61d3a4e9eaf20f51d5015c64ba402eb141e2da20fbf11ddf76a1ee2fa00ce4,PodSandboxId:8b9ea6113e1c5664be3360c026a027fe58c7dd8a0cd70afa36e1fbc3d07f5424,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1699399083065649833,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-9grkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e1b17c9-2f8e-4b88-a477-b02f96bf579b,},Annotations:map[string]string{io.kubernetes.container.hash: 97394a5b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18dcacbd3ccfbce20bc633cc0aedbe895673a893477600323e0ad8d4b8e17288,PodSandboxId:c1bea85b53c77a7d4b6ce57c2c8a8237d1f188ea590a61f3f727a59cb9f2c0a6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1699399082616082073,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zxn8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b50630e-210e-43a7-b412-7945186015fd,},Annotations:map[string]string{io.kubernetes.container.hash: 844e15ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2202b84129cc7cd3a006ec448264d8970473d61675296ab76c2c3835b1f1e49,PodSandboxId:3c807966727776e1d4617f5cfc1b2fa5e7e1c89fe73b7a1cdbeaf8ea35e8c8e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1699399060425673189,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-823610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cdb7ef90168fa2d42b0ffce30b41e2,},Annotations:map[string]string{io.kubernetes.container.hash: e97517f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6bbfe19dc7836b4780954b1507db87ce25b04df729898aad0cfc66e67385903,PodSandboxId:7606ec21e72161a5384f17551840f434063b5dd675f2abdbc1f5fa974e3e4043,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1699399059083880464,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-823610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3ae76ae0dac6775aa1cbee9cb6c908d69150a7d58f4f83fb5b2ba2abadf23fd,PodSandboxId:16195eb063ff9f7d72de6b16e9644069a1c79b6469dae38e2c057e47d5b764cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1699399058933666616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-823610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccbde29768dd7e6005684efa343ab63b3117529dbb323c1e56d2b0f27078d7ce,PodSandboxId:7041ca0adf0ff7d9cca38b371ad5bbf982ebf58eb55ea4c172d03ab4e0787ed3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1699399058886129510,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-823610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbf86c2f685c2f403850faed8ee51a6a,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9439cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7b52e6a1-992b-4fda-bf9c-c6ef77b3757e name=/runtime.v1.RuntimeService/ListContainers
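
The Request/Response pairs in this journal are CRI gRPC calls answered by CRI-O. A minimal sketch of issuing the same unfiltered ListContainers call from Go follows; the socket path /var/run/crio/crio.sock is the conventional CRI-O default and is assumed here, since the log does not state it.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	// CRI-O serves the CRI over a local unix socket; no TLS is involved.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	// An empty filter reproduces the "No filters were applied, returning
	// full container list" behavior seen in the debug log above.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s %s %s\n", c.Id[:12], c.Metadata.Name, c.State)
	}
}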
	Nov 07 23:21:45 ingress-addon-legacy-823610 crio[718]: time="2023-11-07 23:21:45.660904922Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=fec2e5d1-a669-4253-9354-7cb5069e721e name=/runtime.v1.RuntimeService/Version
	Nov 07 23:21:45 ingress-addon-legacy-823610 crio[718]: time="2023-11-07 23:21:45.660958851Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=fec2e5d1-a669-4253-9354-7cb5069e721e name=/runtime.v1.RuntimeService/Version
	Nov 07 23:21:45 ingress-addon-legacy-823610 crio[718]: time="2023-11-07 23:21:45.662478590Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8618b0e3-31b3-489c-a65b-e80c32760438 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 07 23:21:45 ingress-addon-legacy-823610 crio[718]: time="2023-11-07 23:21:45.662934009Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699399305662920682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202349,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=8618b0e3-31b3-489c-a65b-e80c32760438 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 07 23:21:45 ingress-addon-legacy-823610 crio[718]: time="2023-11-07 23:21:45.663651172Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3b087bd2-0e1b-4853-bfa3-5d6184a2f2ac name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:21:45 ingress-addon-legacy-823610 crio[718]: time="2023-11-07 23:21:45.663695603Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3b087bd2-0e1b-4853-bfa3-5d6184a2f2ac name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:21:45 ingress-addon-legacy-823610 crio[718]: time="2023-11-07 23:21:45.663977133Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5366090d1dee4ad53fdaf1d9cb115b4ec2ca8675a6bfa5f99318897ca0fd885,PodSandboxId:a935068f996ae5d2ebcd66fc05a224aa013f6842a9700b7a65df2453cb7a723a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1699399296121407698,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-9tc2x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 064592de-ff07-4e29-bd27-962259a8d36d,},Annotations:map[string]string{io.kubernetes.container.hash: 3bfa8ca9,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdddb20dfcbe0b8d5f995b0796d57eecb95367aa565cb14ebc7c0f05f8c30c79,PodSandboxId:90c15798ead33ee9ece6d032c6341a83389fe007b55cf8b317aed12eb2959dff,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1699399154882648807,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0ab4206-2f23-414e-916b-c9f8899844cb,},Annotations:map[string]string{io.kubernetes.container.hash: c1f390a5,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfac0311b15cc4c8c3b78bb8be463bef264740a5c857e3b714fcec3073409124,PodSandboxId:cb07ad35771d5dc5805f44788e270794c399187ba21b8d0b052cfb744e033153,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1699399132621927428,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-sfpgt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0330f2c0-c5ac-4680-b02b-bdfb376cc2c8,},Annotations:map[string]string{io.kubernetes.container.hash: 86a10c80,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8193e63895516ccf192f59930fe8abf50e55195072dd2c6ae20e03d7336380c5,PodSandboxId:cff73313fd2e2809a3c851a2e0a68567e30487ecd53ecbf4cfa00a786ccc4cbc,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1699399125043960493,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4bl4d,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e8c60fc0-347b-45a4-a657-fa58ed2ae5ca,},Annotations:map[string]string{io.kubernetes.container.hash: 8b4a675a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6fa255c1a01de4fe4eb55284f3a45535fe4f28659ece197beff0470092714bb,PodSandboxId:8d81f40347becba445c0a315ed4942889f4c66e1d8a82074820948ad0c756ed0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1699399123860625990,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-d5rl4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3748c907-32fd-4e85-b566-1bd4022ec596,},Annotations:map[string]string{io.kubernetes.container.hash: 91ac2cba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f4a57fd0f1bfe7e758709fb659476f689a7763e8b714b01b42daf9b33195e62,PodSandboxId:73d9d3dca78d99a9ac58f215bcc55765111f96cfdd68cd379d560afe2638fda8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699399085818056670,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ddb18aa-4fe1-4d13-a4f3-48184e8be2b0,},Annotations:map[string]string{io.kubernetes.container.hash: 51525231,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed61d3a4e9eaf20f51d5015c64ba402eb141e2da20fbf11ddf76a1ee2fa00ce4,PodSandboxId:8b9ea6113e1c5664be3360c026a027fe58c7dd8a0cd70afa36e1fbc3d07f5424,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1699399083065649833,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-9grkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e1b17c9-2f8e-4b88-a477-b02f96bf579b,},Annotations:map[string]string{io.kubernetes.container.hash: 97394a5b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18dcacbd3ccfbce20bc633cc0aedbe895673a893477600323e0ad8d4b8e17288,PodSandboxId:c1bea85b53c77a7d4b6ce57c2c8a8237d1f188ea590a61f3f727a59cb9f2c0a6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1699399082616082073,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zxn8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b50630e-210e-43a7-b412-7945186015fd,},Annotations:map[string]string{io.kubernetes.container.hash: 844e15ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2202b84129cc7cd3a006ec448264d8970473d61675296ab76c2c3835b1f1e49,PodSandboxId:3c807966727776e1d4617f5cfc1b2fa5e7e1c89fe73b7a1cdbeaf8ea35e8c8e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1699399060425673189,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-823610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cdb7ef90168fa2d42b0ffce30b41e2,},Annotations:map[string]string{io.kubernetes.container.hash: e97517f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6bbfe19dc7836b4780954b1507db87ce25b04df729898aad0cfc66e67385903,PodSandboxId:7606ec21e72161a5384f17551840f434063b5dd675f2abdbc1f5fa974e3e4043,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1699399059083880464,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-823610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3ae76ae0dac6775aa1cbee9cb6c908d69150a7d58f4f83fb5b2ba2abadf23fd,PodSandboxId:16195eb063ff9f7d72de6b16e9644069a1c79b6469dae38e2c057e47d5b764cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1699399058933666616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-823610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccbde29768dd7e6005684efa343ab63b3117529dbb323c1e56d2b0f27078d7ce,PodSandboxId:7041ca0adf0ff7d9cca38b371ad5bbf982ebf58eb55ea4c172d03ab4e0787ed3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1699399058886129510,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-823610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbf86c2f685c2f403850faed8ee51a6a,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9439cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3b087bd2-0e1b-4853-bfa3-5d6184a2f2ac name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:21:45 ingress-addon-legacy-823610 crio[718]: time="2023-11-07 23:21:45.697248497Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a4a27429-b0aa-49f8-848c-96a53a51ccdf name=/runtime.v1.RuntimeService/Version
	Nov 07 23:21:45 ingress-addon-legacy-823610 crio[718]: time="2023-11-07 23:21:45.697388368Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a4a27429-b0aa-49f8-848c-96a53a51ccdf name=/runtime.v1.RuntimeService/Version
	Nov 07 23:21:45 ingress-addon-legacy-823610 crio[718]: time="2023-11-07 23:21:45.698709947Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6f9fcb7f-c27a-4f71-aabb-e485cd6dee5d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 07 23:21:45 ingress-addon-legacy-823610 crio[718]: time="2023-11-07 23:21:45.699694993Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699399305699631138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202349,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=6f9fcb7f-c27a-4f71-aabb-e485cd6dee5d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 07 23:21:45 ingress-addon-legacy-823610 crio[718]: time="2023-11-07 23:21:45.706056081Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=91ba9904-63cf-4dea-9fde-2457661c130f name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Nov 07 23:21:45 ingress-addon-legacy-823610 crio[718]: time="2023-11-07 23:21:45.706545448Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a935068f996ae5d2ebcd66fc05a224aa013f6842a9700b7a65df2453cb7a723a,Metadata:&PodSandboxMetadata{Name:hello-world-app-5f5d8b66bb-9tc2x,Uid:064592de-ff07-4e29-bd27-962259a8d36d,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1699399292082653851,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-9tc2x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 064592de-ff07-4e29-bd27-962259a8d36d,pod-template-hash: 5f5d8b66bb,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-07T23:21:31.736478650Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:90c15798ead33ee9ece6d032c6341a83389fe007b55cf8b317aed12eb2959dff,Metadata:&PodSandboxMetadata{Name:nginx,Uid:d0ab4206-2f23-414e-916b-c9f8899844cb,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1699399149843801702,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0ab4206-2f23-414e-916b-c9f8899844cb,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-07T23:19:09.503538561Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a90e73ca68279d66c141e14e57fc25a1770db2780c710862be04c182685b7b52,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:38c228df-acc5-4dda-9673-c59e15ca8e92,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1699399134225537690,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38c228df-acc5-4dda-9673-c59e15ca8e92,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2023-11-07T23:18:53.859743382Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cb07ad35771d5dc5805f44788e270794c399187ba21b8d0b052cfb744e033153,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-7fcf777cb7-sfpgt,Uid:0330f2c0-c5ac-4680-b02b-bdfb376cc2c8,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1699399125499064422,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-sfpgt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0330f2c0-c5ac-4680-b02b-bdfb376cc2c8,pod-template-hash: 7fcf777cb7,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-07T23:18:37.664837920Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cff73313fd2e2809a3c851a2e0a68567e30487ecd53ecbf4cfa00a786ccc4cbc,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-4bl4d,Uid:e8c60fc0-347b-45a4-a657-fa58ed2ae5ca,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1699399119570947010,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,controller-uid: 0c207f68-2961-442f-9d7a-36910b9e8b23,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-4bl4d,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e8c60fc0-347b-45a4-a657-fa58ed2ae5ca,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-07T23:18:37.728107108Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8d81f40347becba445c0a315ed4942889f4c66e1d8a82074820948ad0c756ed0,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-d5rl4,Uid:3748c907-32fd-4e85-b566-1bd4022ec596,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1699399119537148270,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,controller-uid: 3f654330-482d-4ad7-aa55-9dd72371cf99,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-d5rl4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3748c907-32fd-4e85-b566-1bd4022ec596,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-07T23:18:37.697974933Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:73d9d3dca78d99a9ac58f215bcc55765111f96cfdd68cd379d560afe2638fda8,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:6ddb18aa-4fe1-4d13-a4f3-48184e8be2b0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1699399085518706850,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ddb18aa-4fe1-4d13-a4f3-48184e8be2b0,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-11-07T23:18:03.679065611Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8b9ea6113e1c5664be3360c026a027fe58c7dd8a0cd70afa36e1fbc3d07f5424,Metadata:&PodSandboxMetadata{Name:coredns-66bff467f8-9grkq,Uid:8e1b17c9-2f8e-4b88-a477-b02f96bf579b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1699399082450091737,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bff467f8-9grkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e1b17c9-2f8e-4b88-a477-b02f96bf579b,k8s-app: kube-dns,pod-template-hash: 66bff467f8,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-07T23:18:01.919021348Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c1bea85b53c77a7d4b6ce57c2c8a8237d1f188ea590a61f3f727a59cb9f2c0a6,Metadata:&PodSandboxMetadata{Name:kube-proxy-zxn8s,Uid:9b50630e-210e-43a7-b412-7945186015fd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1699399082031645843,Labels:map[string]string{controller-revision-hash: 5bdc57b48f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zxn8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b50630e-210e-43a7-b412-7945186015fd,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-07T23:18:01.687583875Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7041ca0adf0ff7d9cca38b371ad5bbf982ebf58eb55ea4c172d03ab4e0787ed3,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ingress-addon-legacy-823610,Uid:bbf86c2f685c2f403850faed8ee51a6a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1699399058442115605,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-823610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbf86c2f685c2f403850faed8ee51a6a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.221:8443,kubernetes.io/config.hash: bbf86c2f685c2f403850faed8ee51a6a,kubernetes.io/config.seen: 2023-11-07T23:17:37.199974855Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7606ec21e72161a5384f17551840f434063b5dd675f2abdbc1f5fa974e3e4043,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ingress-addon-legacy-823610,Uid:d12e497b0008e22acbcd5a9cf2dd48ac,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1699399058419916607,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-823610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d12e497b0008e22acbcd5a9cf2dd48ac,kubernetes.io/config.seen: 2023-11-07T23:17:37.199977481Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:16195eb063ff9f7d72de6b16e9644069a1c79b6469dae38e2c057e47d5b764cf,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ingress-addon-legacy-823610,Uid:b395a1e17534e69e27827b1f8d737725,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1699399058369633060,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-823610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b395a1e17534e69e27827b1f8d737725,kubernetes.io/config.seen: 2023-11-07T23:17:37.199976374Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3c807966727776e1d4617f5cfc1b2fa5e7e1c89fe73b7a1cdbeaf8ea35e8c8e3,Metadata:&PodSandboxMetadata{Name:etcd-ingress-addon-legacy-823610,Uid:80cdb7ef90168fa2d42b0ffce30b41e2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1699399058361643655,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ingress-addon-legacy-823610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cdb7ef90168fa2d42b0ffce30b41e2,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.221:2379,kubernetes.io/config.hash: 80cdb7ef90168fa2d42b0ffce30b41e2,kubernetes.io/config.seen: 2023-11-07T23:17:37.199971474Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=91ba9904-63cf-4dea-9fde-2457661c130f name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
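
The v1alpha2 ListPodSandbox response above mixes SANDBOX_READY sandboxes with SANDBOX_NOTREADY ones left behind by the exited ingress-nginx pods. A minimal sketch follows of requesting ready sandboxes only, over the v1 API and under the same /var/run/crio/crio.sock socket-path assumption as the previous sketch:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	// Unlike the unfiltered call in the journal, this asks the runtime for
	// ready sandboxes only, so the exited ingress-nginx pods are omitted.
	ready := runtimeapi.PodSandboxState_SANDBOX_READY
	resp, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{
		Filter: &runtimeapi.PodSandboxFilter{
			State: &runtimeapi.PodSandboxStateValue{State: ready},
		},
	})
	if err != nil {
		panic(err)
	}
	for _, s := range resp.Items {
		fmt.Printf("%s %s/%s\n", s.Id[:12], s.Metadata.Namespace, s.Metadata.Name)
	}
}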
	Nov 07 23:21:45 ingress-addon-legacy-823610 crio[718]: time="2023-11-07 23:21:45.707077142Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=20ea34d4-d959-446d-8598-7b698e49a78d name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:21:45 ingress-addon-legacy-823610 crio[718]: time="2023-11-07 23:21:45.707171321Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=20ea34d4-d959-446d-8598-7b698e49a78d name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:21:45 ingress-addon-legacy-823610 crio[718]: time="2023-11-07 23:21:45.707668815Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5366090d1dee4ad53fdaf1d9cb115b4ec2ca8675a6bfa5f99318897ca0fd885,PodSandboxId:a935068f996ae5d2ebcd66fc05a224aa013f6842a9700b7a65df2453cb7a723a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1699399296121407698,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-9tc2x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 064592de-ff07-4e29-bd27-962259a8d36d,},Annotations:map[string]string{io.kubernetes.container.hash: 3bfa8ca9,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdddb20dfcbe0b8d5f995b0796d57eecb95367aa565cb14ebc7c0f05f8c30c79,PodSandboxId:90c15798ead33ee9ece6d032c6341a83389fe007b55cf8b317aed12eb2959dff,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1699399154882648807,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0ab4206-2f23-414e-916b-c9f8899844cb,},Annotations:map[string]string{io.kubernetes.container.hash: c1f390a5,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfac0311b15cc4c8c3b78bb8be463bef264740a5c857e3b714fcec3073409124,PodSandboxId:cb07ad35771d5dc5805f44788e270794c399187ba21b8d0b052cfb744e033153,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1699399132621927428,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-sfpgt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0330f2c0-c5ac-4680-b02b-bdfb376cc2c8,},Annotations:map[string]string{io.kubernetes.container.hash: 86a10c80,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8193e63895516ccf192f59930fe8abf50e55195072dd2c6ae20e03d7336380c5,PodSandboxId:cff73313fd2e2809a3c851a2e0a68567e30487ecd53ecbf4cfa00a786ccc4cbc,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1699399125043960493,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4bl4d,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e8c60fc0-347b-45a4-a657-fa58ed2ae5ca,},Annotations:map[string]string{io.kubernetes.container.hash: 8b4a675a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6fa255c1a01de4fe4eb55284f3a45535fe4f28659ece197beff0470092714bb,PodSandboxId:8d81f40347becba445c0a315ed4942889f4c66e1d8a82074820948ad0c756ed0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1699399123860625990,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-d5rl4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3748c907-32fd-4e85-b566-1bd4022ec596,},Annotations:map[string]string{io.kubernetes.container.hash: 91ac2cba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f4a57fd0f1bfe7e758709fb659476f689a7763e8b714b01b42daf9b33195e62,PodSandboxId:73d9d3dca78d99a9ac58f215bcc55765111f96cfdd68cd379d560afe2638fda8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699399085818056670,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ddb18aa-4fe1-4d13-a4f3-48184e8be2b0,},Annotations:map[string]string{io.kubernetes.container.hash: 51525231,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed61d3a4e9eaf20f51d5015c64ba402eb141e2da20fbf11ddf76a1ee2fa00ce4,PodSandboxId:8b9ea6113e1c5664be3360c026a027fe58c7dd8a0cd70afa36e1fbc3d07f5424,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1699399083065649833,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-9grkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e1b17c9-2f8e-4b88-a477-b02f96bf579b,},Annotations:map[string]string{io.kubernetes.container.hash: 97394a5b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18dcacbd3ccfbce20bc633cc0aedbe895673a893477600323e0ad8d4b8e17288,PodSandboxId:c1bea85b53c77a7d4b6ce57c2c8a8237d1f188ea590a61f3f727a59cb9f2c0a6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1699399082616082073,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zxn8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b50630e-210e-43a7-b412-7945186015fd,},Annotations:map[string]string{io.kubernetes.container.hash: 844e15ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2202b84129cc7cd3a006ec448264d8970473d61675296ab76c2c3835b1f1e49,PodSandboxId:3c807966727776e1d4617f5cfc1b2fa5e7e1c89fe73b7a1cdbeaf8ea35e8c8e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1699399060425673189,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-823610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cdb7ef90168fa2d42b0ffce30b41e2,},Annotations:map[string]string{io.kubernetes.container.hash: e97517f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6bbfe19dc7836b4780954b1507db87ce25b04df729898aad0cfc66e67385903,PodSandboxId:7606ec21e72161a5384f17551840f434063b5dd675f2abdbc1f5fa974e3e4043,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1699399059083880464,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-823610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3ae76ae0dac6775aa1cbee9cb6c908d69150a7d58f4f83fb5b2ba2abadf23fd,PodSandboxId:16195eb063ff9f7d72de6b16e9644069a1c79b6469dae38e2c057e47d5b764cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1699399058933666616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-823610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccbde29768dd7e6005684efa343ab63b3117529dbb323c1e56d2b0f27078d7ce,PodSandboxId:7041ca0adf0ff7d9cca38b371ad5bbf982ebf58eb55ea4c172d03ab4e0787ed3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1699399058886129510,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-823610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbf86c2f685c2f403850faed8ee51a6a,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9439cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=20ea34d4-d959-446d-8598-7b698e49a78d name=/runtime.v1.RuntimeSer
vice/ListContainers
	Nov 07 23:21:45 ingress-addon-legacy-823610 crio[718]: time="2023-11-07 23:21:45.710510674Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0185287c-5af1-4855-b352-bb2b262cf6ed name=/runtime.v1alpha2.RuntimeService/ListContainers
	Nov 07 23:21:45 ingress-addon-legacy-823610 crio[718]: time="2023-11-07 23:21:45.710613090Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0185287c-5af1-4855-b352-bb2b262cf6ed name=/runtime.v1alpha2.RuntimeService/ListContainers
	Nov 07 23:21:45 ingress-addon-legacy-823610 crio[718]: time="2023-11-07 23:21:45.710980710Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5366090d1dee4ad53fdaf1d9cb115b4ec2ca8675a6bfa5f99318897ca0fd885,PodSandboxId:a935068f996ae5d2ebcd66fc05a224aa013f6842a9700b7a65df2453cb7a723a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1699399296121407698,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-9tc2x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 064592de-ff07-4e29-bd27-962259a8d36d,},Annotations:map[string]string{io.kubernetes.container.hash: 3bfa8ca9,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdddb20dfcbe0b8d5f995b0796d57eecb95367aa565cb14ebc7c0f05f8c30c79,PodSandboxId:90c15798ead33ee9ece6d032c6341a83389fe007b55cf8b317aed12eb2959dff,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1699399154882648807,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0ab4206-2f23-414e-916b-c9f8899844cb,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: c1f390a5,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfac0311b15cc4c8c3b78bb8be463bef264740a5c857e3b714fcec3073409124,PodSandboxId:cb07ad35771d5dc5805f44788e270794c399187ba21b8d0b052cfb744e033153,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1699399132621927428,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-sfpgt,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0330f2c0-c5ac-4680-b02b-bdfb376cc2c8,},Annotations:map[string]string{io.kubernetes.container.hash: 86a10c80,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8193e63895516ccf192f59930fe8abf50e55195072dd2c6ae20e03d7336380c5,PodSandboxId:cff73313fd2e2809a3c851a2e0a68567e30487ecd53ecbf4cfa00a786ccc4cbc,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1699399125043960493,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4bl4d,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e8c60fc0-347b-45a4-a657-fa58ed2ae5ca,},Annotations:map[string]string{io.kubernetes.container.hash: 8b4a675a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6fa255c1a01de4fe4eb55284f3a45535fe4f28659ece197beff0470092714bb,PodSandboxId:8d81f40347becba445c0a315ed4942889f4c66e1d8a82074820948ad0c756ed0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1699399123860625990,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-d5rl4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3748c907-32fd-4e85-b566-1bd4022ec596,},Annotations:map[string]string{io.kubernetes.container.hash: 91ac2cba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f4a57fd0f1bfe7e758709fb659476f689a7763e8b714b01b42daf9b33195e62,PodSandboxId:73d9d3dca78d99a9ac58f215bcc55765111f96cfdd68cd379d560afe2638fda8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699399085818056670,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ddb18aa-4fe1-4d13-a4f3-48184e8be2b0,},Annotations:map[string]string{io.kubernetes.container.hash: 51525231,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed61d3a4e9eaf20f51d5015c64ba402eb141e2da20fbf11ddf76a1ee2fa00ce4,PodSandboxId:8b9ea6113e1c5664be3360c026a027fe58c7dd8a0cd70afa36e1fbc3d07f5424,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1699399083065649833,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-9grkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e1b17c9-2f8e-4b88-a477-b02f96bf579b,},Annotations:map[string]string{io.kubernetes.container.hash: 97394a5b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18dcacbd3ccfbce20bc633cc0aed
be895673a893477600323e0ad8d4b8e17288,PodSandboxId:c1bea85b53c77a7d4b6ce57c2c8a8237d1f188ea590a61f3f727a59cb9f2c0a6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1699399082616082073,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zxn8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b50630e-210e-43a7-b412-7945186015fd,},Annotations:map[string]string{io.kubernetes.container.hash: 844e15ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2202b84129cc7cd3a006ec448264d8970473d61675296ab76c2c3835b1f1e49,Pod
SandboxId:3c807966727776e1d4617f5cfc1b2fa5e7e1c89fe73b7a1cdbeaf8ea35e8c8e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1699399060425673189,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-823610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80cdb7ef90168fa2d42b0ffce30b41e2,},Annotations:map[string]string{io.kubernetes.container.hash: e97517f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6bbfe19dc7836b4780954b1507db87ce25b04df729898aad0cfc66e67385903,PodSandboxId:7606ec21e72161a5384f17551840f434063b
5dd675f2abdbc1f5fa974e3e4043,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1699399059083880464,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-823610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3ae76ae0dac6775aa1cbee9cb6c908d69150a7d58f4f83fb5b2ba2abadf23fd,PodSandboxId:16195eb063ff9f7d72de6b16e9644069a1c79b6469
dae38e2c057e47d5b764cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1699399058933666616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-823610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccbde29768dd7e6005684efa343ab63b3117529dbb323c1e56d2b0f27078d7ce,PodSandboxId:7041ca0adf0f
f7d9cca38b371ad5bbf982ebf58eb55ea4c172d03ab4e0787ed3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1699399058886129510,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-823610,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbf86c2f685c2f403850faed8ee51a6a,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9439cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0185287c-5af1-4855-b352-bb2b262cf6ed name=/runtime.v1alpha2.Runt
imeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c5366090d1dee       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            9 seconds ago       Running             hello-world-app           0                   a935068f996ae       hello-world-app-5f5d8b66bb-9tc2x
	fdddb20dfcbe0       docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d                    2 minutes ago       Running             nginx                     0                   90c15798ead33       nginx
	dfac0311b15cc       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   cb07ad35771d5       ingress-nginx-controller-7fcf777cb7-sfpgt
	8193e63895516       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   cff73313fd2e2       ingress-nginx-admission-patch-4bl4d
	b6fa255c1a01d       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   8d81f40347bec       ingress-nginx-admission-create-d5rl4
	2f4a57fd0f1bf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   73d9d3dca78d9       storage-provisioner
	ed61d3a4e9eaf       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   8b9ea6113e1c5       coredns-66bff467f8-9grkq
	18dcacbd3ccfb       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   c1bea85b53c77       kube-proxy-zxn8s
	c2202b84129cc       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   4 minutes ago       Running             etcd                      0                   3c80796672777       etcd-ingress-addon-legacy-823610
	a6bbfe19dc783       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   4 minutes ago       Running             kube-scheduler            0                   7606ec21e7216       kube-scheduler-ingress-addon-legacy-823610
	c3ae76ae0dac6       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   4 minutes ago       Running             kube-controller-manager   0                   16195eb063ff9       kube-controller-manager-ingress-addon-legacy-823610
	ccbde29768dd7       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   4 minutes ago       Running             kube-apiserver            0                   7041ca0adf0ff       kube-apiserver-ingress-addon-legacy-823610
	
	* 
	* ==> coredns [ed61d3a4e9eaf20f51d5015c64ba402eb141e2da20fbf11ddf76a1ee2fa00ce4] <==
	* [INFO] 10.244.0.6:50860 - 52 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000158193s
	[INFO] 10.244.0.6:50860 - 3350 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000077929s
	[INFO] 10.244.0.6:50860 - 28695 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000054217s
	[INFO] 10.244.0.6:50860 - 48288 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000085759s
	[INFO] 10.244.0.6:35416 - 44522 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00017745s
	[INFO] 10.244.0.6:35416 - 45428 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000134099s
	[INFO] 10.244.0.6:35416 - 32610 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000127585s
	[INFO] 10.244.0.6:35416 - 6130 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000073011s
	[INFO] 10.244.0.6:35416 - 62522 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000357986s
	[INFO] 10.244.0.6:35416 - 16657 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000887434s
	[INFO] 10.244.0.6:35416 - 16346 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000053322s
	[INFO] 10.244.0.6:57234 - 13494 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000183406s
	[INFO] 10.244.0.6:49310 - 20431 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000053962s
	[INFO] 10.244.0.6:49310 - 45639 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000095244s
	[INFO] 10.244.0.6:49310 - 10815 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000035545s
	[INFO] 10.244.0.6:57234 - 58431 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000060371s
	[INFO] 10.244.0.6:49310 - 61761 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000042966s
	[INFO] 10.244.0.6:49310 - 46161 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003794s
	[INFO] 10.244.0.6:49310 - 24276 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003964s
	[INFO] 10.244.0.6:49310 - 28233 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000034939s
	[INFO] 10.244.0.6:57234 - 20577 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000074248s
	[INFO] 10.244.0.6:57234 - 29513 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00007532s
	[INFO] 10.244.0.6:57234 - 51613 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000080099s
	[INFO] 10.244.0.6:57234 - 42798 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000068093s
	[INFO] 10.244.0.6:57234 - 17234 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000143449s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-823610
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-823610
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e
	                    minikube.k8s.io/name=ingress-addon-legacy-823610
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_07T23_17_46_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 Nov 2023 23:17:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-823610
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 Nov 2023 23:21:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 Nov 2023 23:19:17 +0000   Tue, 07 Nov 2023 23:17:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 Nov 2023 23:19:17 +0000   Tue, 07 Nov 2023 23:17:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 Nov 2023 23:19:17 +0000   Tue, 07 Nov 2023 23:17:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 Nov 2023 23:19:17 +0000   Tue, 07 Nov 2023 23:17:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.221
	  Hostname:    ingress-addon-legacy-823610
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	System Info:
	  Machine ID:                 b15263f2b226427caafc8a99d84c7c3b
	  System UUID:                b15263f2-b226-427c-aafc-8a99d84c7c3b
	  Boot ID:                    29e93668-7e70-4dc5-b372-ebbc887ae939
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-9tc2x                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 coredns-66bff467f8-9grkq                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m44s
	  kube-system                 etcd-ingress-addon-legacy-823610                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-apiserver-ingress-addon-legacy-823610             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-823610    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-proxy-zxn8s                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-scheduler-ingress-addon-legacy-823610             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 4m8s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m8s (x4 over 4m8s)  kubelet     Node ingress-addon-legacy-823610 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x5 over 4m8s)  kubelet     Node ingress-addon-legacy-823610 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x4 over 4m8s)  kubelet     Node ingress-addon-legacy-823610 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 3m58s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m58s                kubelet     Node ingress-addon-legacy-823610 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m58s                kubelet     Node ingress-addon-legacy-823610 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m58s                kubelet     Node ingress-addon-legacy-823610 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m58s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m48s                kubelet     Node ingress-addon-legacy-823610 status is now: NodeReady
	  Normal  Starting                 3m42s                kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Nov 7 23:17] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.093370] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.400388] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.233793] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.151989] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.049965] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.309523] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.107662] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.150087] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.116574] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.208779] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[  +8.067445] systemd-fstab-generator[1032]: Ignoring "noauto" for root device
	[  +3.393264] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +9.062170] systemd-fstab-generator[1441]: Ignoring "noauto" for root device
	[Nov 7 23:18] kauditd_printk_skb: 6 callbacks suppressed
	[ +32.913398] kauditd_printk_skb: 20 callbacks suppressed
	[  +9.131354] kauditd_printk_skb: 6 callbacks suppressed
	[Nov 7 23:19] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.127251] kauditd_printk_skb: 3 callbacks suppressed
	[Nov 7 23:21] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [c2202b84129cc7cd3a006ec448264d8970473d61675296ab76c2c3835b1f1e49] <==
	* 2023-11-07 23:17:40.594866 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-11-07 23:17:40.599744 I | etcdserver: e7b0d5fc33cf92f8 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/11/07 23:17:40 INFO: e7b0d5fc33cf92f8 switched to configuration voters=(16695079097840145144)
	2023-11-07 23:17:40.600814 I | etcdserver/membership: added member e7b0d5fc33cf92f8 [https://192.168.39.221:2380] to cluster c75d0b2482cd9027
	2023-11-07 23:17:40.601464 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-07 23:17:40.601598 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-11-07 23:17:40.601686 I | embed: listening for peers on 192.168.39.221:2380
	raft2023/11/07 23:17:40 INFO: e7b0d5fc33cf92f8 is starting a new election at term 1
	raft2023/11/07 23:17:40 INFO: e7b0d5fc33cf92f8 became candidate at term 2
	raft2023/11/07 23:17:40 INFO: e7b0d5fc33cf92f8 received MsgVoteResp from e7b0d5fc33cf92f8 at term 2
	raft2023/11/07 23:17:40 INFO: e7b0d5fc33cf92f8 became leader at term 2
	raft2023/11/07 23:17:40 INFO: raft.node: e7b0d5fc33cf92f8 elected leader e7b0d5fc33cf92f8 at term 2
	2023-11-07 23:17:40.679454 I | etcdserver: setting up the initial cluster version to 3.4
	2023-11-07 23:17:40.680888 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-11-07 23:17:40.681047 I | etcdserver/api: enabled capabilities for version 3.4
	2023-11-07 23:17:40.681104 I | etcdserver: published {Name:ingress-addon-legacy-823610 ClientURLs:[https://192.168.39.221:2379]} to cluster c75d0b2482cd9027
	2023-11-07 23:17:40.681174 I | embed: ready to serve client requests
	2023-11-07 23:17:40.682426 I | embed: serving client requests on 192.168.39.221:2379
	2023-11-07 23:17:40.682722 I | embed: ready to serve client requests
	2023-11-07 23:17:40.683692 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-07 23:18:02.484581 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/coredns\" " with result "range_response_count:1 size:3907" took too long (145.667593ms) to execute
	2023-11-07 23:18:02.752814 W | etcdserver: request "header:<ID:10590368094944519553 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-zxn8s.17957a763922e7de\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-zxn8s.17957a763922e7de\" value_size:732 lease:1366996058089743329 >> failure:<>>" with result "size:16" took too long (124.00996ms) to execute
	2023-11-07 23:18:02.819641 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:7 size:33279" took too long (123.129552ms) to execute
	2023-11-07 23:18:02.829226 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-66bff467f8-tp5sr\" " with result "range_response_count:1 size:4290" took too long (176.176575ms) to execute
	2023-11-07 23:19:20.673086 W | etcdserver: read-only range request "key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" " with result "range_response_count:1 size:2214" took too long (119.029226ms) to execute
	
	* 
	* ==> kernel <==
	*  23:21:46 up 4 min,  0 users,  load average: 0.83, 0.58, 0.25
	Linux ingress-addon-legacy-823610 5.10.57 #1 SMP Tue Nov 7 06:51:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [ccbde29768dd7e6005684efa343ab63b3117529dbb323c1e56d2b0f27078d7ce] <==
	* I1107 23:17:43.650739       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1107 23:17:43.650768       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	I1107 23:17:43.700685       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1107 23:17:43.700746       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1107 23:17:43.700759       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1107 23:17:43.703674       1 cache.go:39] Caches are synced for autoregister controller
	I1107 23:17:43.752481       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1107 23:17:44.594880       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1107 23:17:44.595152       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1107 23:17:44.606596       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1107 23:17:44.611847       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1107 23:17:44.611897       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1107 23:17:45.069078       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1107 23:17:45.107133       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1107 23:17:45.258663       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.221]
	I1107 23:17:45.259614       1 controller.go:609] quota admission added evaluator for: endpoints
	I1107 23:17:45.263428       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1107 23:17:45.950482       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1107 23:17:46.858425       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1107 23:17:46.919184       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1107 23:17:47.241085       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1107 23:18:01.671025       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1107 23:18:01.842004       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1107 23:18:37.655143       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1107 23:19:09.331055       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [c3ae76ae0dac6775aa1cbee9cb6c908d69150a7d58f4f83fb5b2ba2abadf23fd] <==
	* I1107 23:18:01.839766       1 shared_informer.go:230] Caches are synced for deployment 
	I1107 23:18:01.844432       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"805ea10f-158c-4a33-be6e-cef63f620786", APIVersion:"apps/v1", ResourceVersion:"223", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
	I1107 23:18:01.855583       1 shared_informer.go:230] Caches are synced for ReplicaSet 
	I1107 23:18:01.869014       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"5c9febeb-d9d4-4232-8755-cfa80c4aee0e", APIVersion:"apps/v1", ResourceVersion:"346", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-tp5sr
	I1107 23:18:01.874570       1 shared_informer.go:230] Caches are synced for certificate-csrapproving 
	I1107 23:18:01.891525       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"5c9febeb-d9d4-4232-8755-cfa80c4aee0e", APIVersion:"apps/v1", ResourceVersion:"346", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-9grkq
	I1107 23:18:01.926162       1 shared_informer.go:230] Caches are synced for ReplicationController 
	I1107 23:18:01.955788       1 shared_informer.go:230] Caches are synced for certificate-csrsigning 
	I1107 23:18:01.958477       1 shared_informer.go:230] Caches are synced for resource quota 
	I1107 23:18:01.984784       1 shared_informer.go:230] Caches are synced for disruption 
	I1107 23:18:01.984825       1 disruption.go:339] Sending events to api server.
	I1107 23:18:02.005954       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1107 23:18:02.010359       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1107 23:18:02.010479       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1107 23:18:02.012994       1 shared_informer.go:230] Caches are synced for resource quota 
	I1107 23:18:02.497999       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"805ea10f-158c-4a33-be6e-cef63f620786", APIVersion:"apps/v1", ResourceVersion:"365", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1107 23:18:02.615063       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"5c9febeb-d9d4-4232-8755-cfa80c4aee0e", APIVersion:"apps/v1", ResourceVersion:"366", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-tp5sr
	I1107 23:18:37.631102       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"bbb32936-10fd-442b-9cfe-f6eb786544e0", APIVersion:"apps/v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1107 23:18:37.668394       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"8a0b09b9-fc0f-4694-bc7a-3d31173aeb31", APIVersion:"apps/v1", ResourceVersion:"478", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-sfpgt
	I1107 23:18:37.681386       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"3f654330-482d-4ad7-aa55-9dd72371cf99", APIVersion:"batch/v1", ResourceVersion:"482", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-d5rl4
	I1107 23:18:37.723444       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"0c207f68-2961-442f-9d7a-36910b9e8b23", APIVersion:"batch/v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-4bl4d
	I1107 23:18:44.519009       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"3f654330-482d-4ad7-aa55-9dd72371cf99", APIVersion:"batch/v1", ResourceVersion:"492", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1107 23:18:45.526980       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"0c207f68-2961-442f-9d7a-36910b9e8b23", APIVersion:"batch/v1", ResourceVersion:"503", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1107 23:21:31.697882       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"7516000e-12f2-466a-83d7-1bfbd5803806", APIVersion:"apps/v1", ResourceVersion:"704", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1107 23:21:31.719980       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"5b994c45-23f1-4dbc-b266-8223eca609f8", APIVersion:"apps/v1", ResourceVersion:"705", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-9tc2x
	
	* 
	* ==> kube-proxy [18dcacbd3ccfbce20bc633cc0aedbe895673a893477600323e0ad8d4b8e17288] <==
	* W1107 23:18:03.746566       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1107 23:18:03.755194       1 node.go:136] Successfully retrieved node IP: 192.168.39.221
	I1107 23:18:03.755244       1 server_others.go:186] Using iptables Proxier.
	I1107 23:18:03.755586       1 server.go:583] Version: v1.18.20
	I1107 23:18:03.759000       1 config.go:315] Starting service config controller
	I1107 23:18:03.759047       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1107 23:18:03.759339       1 config.go:133] Starting endpoints config controller
	I1107 23:18:03.759470       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1107 23:18:03.859419       1 shared_informer.go:230] Caches are synced for service config 
	I1107 23:18:03.863636       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [a6bbfe19dc7836b4780954b1507db87ce25b04df729898aad0cfc66e67385903] <==
	* W1107 23:17:43.674248       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1107 23:17:43.674257       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1107 23:17:43.674344       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1107 23:17:43.698758       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1107 23:17:43.698828       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1107 23:17:43.704831       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1107 23:17:43.705442       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1107 23:17:43.705483       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1107 23:17:43.705588       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1107 23:17:43.716941       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1107 23:17:43.717104       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1107 23:17:43.717173       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1107 23:17:43.717227       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1107 23:17:43.717347       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1107 23:17:43.717407       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1107 23:17:43.717458       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1107 23:17:43.717500       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1107 23:17:43.717547       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1107 23:17:43.717594       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1107 23:17:43.717644       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1107 23:17:43.718898       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1107 23:17:44.546485       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1107 23:17:44.816936       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1107 23:17:44.820941       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1107 23:17:46.405736       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-07 23:17:12 UTC, ends at Tue 2023-11-07 23:21:46 UTC. --
	Nov 07 23:18:46 ingress-addon-legacy-823610 kubelet[1448]: I1107 23:18:46.822071    1448 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8c60fc0-347b-45a4-a657-fa58ed2ae5ca-ingress-nginx-admission-token-kjsjl" (OuterVolumeSpecName: "ingress-nginx-admission-token-kjsjl") pod "e8c60fc0-347b-45a4-a657-fa58ed2ae5ca" (UID: "e8c60fc0-347b-45a4-a657-fa58ed2ae5ca"). InnerVolumeSpecName "ingress-nginx-admission-token-kjsjl". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 07 23:18:46 ingress-addon-legacy-823610 kubelet[1448]: I1107 23:18:46.906792    1448 reconciler.go:319] Volume detached for volume "ingress-nginx-admission-token-kjsjl" (UniqueName: "kubernetes.io/secret/e8c60fc0-347b-45a4-a657-fa58ed2ae5ca-ingress-nginx-admission-token-kjsjl") on node "ingress-addon-legacy-823610" DevicePath ""
	Nov 07 23:18:47 ingress-addon-legacy-823610 kubelet[1448]: E1107 23:18:47.679705    1448 cadvisor_stats_provider.go:400] Partial failure issuing cadvisor.ContainerInfoV2: partial failures: ["/kubepods/besteffort/pode8c60fc0-347b-45a4-a657-fa58ed2ae5ca/crio-conmon-cff73313fd2e2809a3c851a2e0a68567e30487ecd53ecbf4cfa00a786ccc4cbc": RecentStats: unable to find data in memory cache]
	Nov 07 23:18:53 ingress-addon-legacy-823610 kubelet[1448]: I1107 23:18:53.860418    1448 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Nov 07 23:18:54 ingress-addon-legacy-823610 kubelet[1448]: I1107 23:18:54.027731    1448 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-7hnmv" (UniqueName: "kubernetes.io/secret/38c228df-acc5-4dda-9673-c59e15ca8e92-minikube-ingress-dns-token-7hnmv") pod "kube-ingress-dns-minikube" (UID: "38c228df-acc5-4dda-9673-c59e15ca8e92")
	Nov 07 23:19:09 ingress-addon-legacy-823610 kubelet[1448]: I1107 23:19:09.503792    1448 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Nov 07 23:19:09 ingress-addon-legacy-823610 kubelet[1448]: I1107 23:19:09.679772    1448 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-zm6br" (UniqueName: "kubernetes.io/secret/d0ab4206-2f23-414e-916b-c9f8899844cb-default-token-zm6br") pod "nginx" (UID: "d0ab4206-2f23-414e-916b-c9f8899844cb")
	Nov 07 23:21:31 ingress-addon-legacy-823610 kubelet[1448]: I1107 23:21:31.736754    1448 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Nov 07 23:21:31 ingress-addon-legacy-823610 kubelet[1448]: I1107 23:21:31.831393    1448 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-zm6br" (UniqueName: "kubernetes.io/secret/064592de-ff07-4e29-bd27-962259a8d36d-default-token-zm6br") pod "hello-world-app-5f5d8b66bb-9tc2x" (UID: "064592de-ff07-4e29-bd27-962259a8d36d")
	Nov 07 23:21:33 ingress-addon-legacy-823610 kubelet[1448]: I1107 23:21:33.640557    1448 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: ba1104339f1ca1e67fbcf4311ec5a142dab0d247747f92bd5e18e80016a4e7dc
	Nov 07 23:21:33 ingress-addon-legacy-823610 kubelet[1448]: I1107 23:21:33.669786    1448 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: ba1104339f1ca1e67fbcf4311ec5a142dab0d247747f92bd5e18e80016a4e7dc
	Nov 07 23:21:33 ingress-addon-legacy-823610 kubelet[1448]: E1107 23:21:33.670631    1448 remote_runtime.go:295] ContainerStatus "ba1104339f1ca1e67fbcf4311ec5a142dab0d247747f92bd5e18e80016a4e7dc" from runtime service failed: rpc error: code = NotFound desc = could not find container "ba1104339f1ca1e67fbcf4311ec5a142dab0d247747f92bd5e18e80016a4e7dc": container with ID starting with ba1104339f1ca1e67fbcf4311ec5a142dab0d247747f92bd5e18e80016a4e7dc not found: ID does not exist
	Nov 07 23:21:33 ingress-addon-legacy-823610 kubelet[1448]: I1107 23:21:33.737190    1448 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-7hnmv" (UniqueName: "kubernetes.io/secret/38c228df-acc5-4dda-9673-c59e15ca8e92-minikube-ingress-dns-token-7hnmv") pod "38c228df-acc5-4dda-9673-c59e15ca8e92" (UID: "38c228df-acc5-4dda-9673-c59e15ca8e92")
	Nov 07 23:21:33 ingress-addon-legacy-823610 kubelet[1448]: I1107 23:21:33.739445    1448 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38c228df-acc5-4dda-9673-c59e15ca8e92-minikube-ingress-dns-token-7hnmv" (OuterVolumeSpecName: "minikube-ingress-dns-token-7hnmv") pod "38c228df-acc5-4dda-9673-c59e15ca8e92" (UID: "38c228df-acc5-4dda-9673-c59e15ca8e92"). InnerVolumeSpecName "minikube-ingress-dns-token-7hnmv". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 07 23:21:33 ingress-addon-legacy-823610 kubelet[1448]: I1107 23:21:33.837475    1448 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-7hnmv" (UniqueName: "kubernetes.io/secret/38c228df-acc5-4dda-9673-c59e15ca8e92-minikube-ingress-dns-token-7hnmv") on node "ingress-addon-legacy-823610" DevicePath ""
	Nov 07 23:21:38 ingress-addon-legacy-823610 kubelet[1448]: E1107 23:21:38.253908    1448 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-sfpgt.17957aa86fc90b00", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-sfpgt", UID:"0330f2c0-c5ac-4680-b02b-bdfb376cc2c8", APIVersion:"v1", ResourceVersion:"483", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-823610"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc14acf808ef27700, ext:231518635026, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc14acf808ef27700, ext:231518635026, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-sfpgt.17957aa86fc90b00" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 07 23:21:38 ingress-addon-legacy-823610 kubelet[1448]: E1107 23:21:38.284590    1448 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-sfpgt.17957aa86fc90b00", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-sfpgt", UID:"0330f2c0-c5ac-4680-b02b-bdfb376cc2c8", APIVersion:"v1", ResourceVersion:"483", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-823610"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc14acf808ef27700, ext:231518635026, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc14acf80900b734d, ext:231537049691, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-sfpgt.17957aa86fc90b00" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 07 23:21:40 ingress-addon-legacy-823610 kubelet[1448]: W1107 23:21:40.687455    1448 pod_container_deletor.go:77] Container "cb07ad35771d5dc5805f44788e270794c399187ba21b8d0b052cfb744e033153" not found in pod's containers
	Nov 07 23:21:42 ingress-addon-legacy-823610 kubelet[1448]: I1107 23:21:42.378912    1448 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-kbt4s" (UniqueName: "kubernetes.io/secret/0330f2c0-c5ac-4680-b02b-bdfb376cc2c8-ingress-nginx-token-kbt4s") pod "0330f2c0-c5ac-4680-b02b-bdfb376cc2c8" (UID: "0330f2c0-c5ac-4680-b02b-bdfb376cc2c8")
	Nov 07 23:21:42 ingress-addon-legacy-823610 kubelet[1448]: I1107 23:21:42.379012    1448 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/0330f2c0-c5ac-4680-b02b-bdfb376cc2c8-webhook-cert") pod "0330f2c0-c5ac-4680-b02b-bdfb376cc2c8" (UID: "0330f2c0-c5ac-4680-b02b-bdfb376cc2c8")
	Nov 07 23:21:42 ingress-addon-legacy-823610 kubelet[1448]: I1107 23:21:42.383741    1448 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0330f2c0-c5ac-4680-b02b-bdfb376cc2c8-ingress-nginx-token-kbt4s" (OuterVolumeSpecName: "ingress-nginx-token-kbt4s") pod "0330f2c0-c5ac-4680-b02b-bdfb376cc2c8" (UID: "0330f2c0-c5ac-4680-b02b-bdfb376cc2c8"). InnerVolumeSpecName "ingress-nginx-token-kbt4s". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 07 23:21:42 ingress-addon-legacy-823610 kubelet[1448]: I1107 23:21:42.384238    1448 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0330f2c0-c5ac-4680-b02b-bdfb376cc2c8-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "0330f2c0-c5ac-4680-b02b-bdfb376cc2c8" (UID: "0330f2c0-c5ac-4680-b02b-bdfb376cc2c8"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 07 23:21:42 ingress-addon-legacy-823610 kubelet[1448]: I1107 23:21:42.479363    1448 reconciler.go:319] Volume detached for volume "ingress-nginx-token-kbt4s" (UniqueName: "kubernetes.io/secret/0330f2c0-c5ac-4680-b02b-bdfb376cc2c8-ingress-nginx-token-kbt4s") on node "ingress-addon-legacy-823610" DevicePath ""
	Nov 07 23:21:42 ingress-addon-legacy-823610 kubelet[1448]: I1107 23:21:42.479426    1448 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/0330f2c0-c5ac-4680-b02b-bdfb376cc2c8-webhook-cert") on node "ingress-addon-legacy-823610" DevicePath ""
	Nov 07 23:21:43 ingress-addon-legacy-823610 kubelet[1448]: W1107 23:21:43.326074    1448 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/0330f2c0-c5ac-4680-b02b-bdfb376cc2c8/volumes" does not exist
	
	* 
	* ==> storage-provisioner [2f4a57fd0f1bfe7e758709fb659476f689a7763e8b714b01b42daf9b33195e62] <==
	* I1107 23:18:05.925773       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1107 23:18:05.935064       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1107 23:18:05.935210       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1107 23:18:05.947354       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1107 23:18:05.947601       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-823610_9bd5e1e7-1137-410a-8d4c-37481628200b!
	I1107 23:18:05.949086       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f8ff4c07-663c-4df5-aeb0-9af49f9e6f96", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-823610_9bd5e1e7-1137-410a-8d4c-37481628200b became leader
	I1107 23:18:06.047904       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-823610_9bd5e1e7-1137-410a-8d4c-37481628200b!
	

                                                
                                                
-- /stdout --
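The lease being acquired in the storage-provisioner log above is a plain Endpoints object; which instance currently holds it is recorded in a leader-election annotation and can be read back directly (assuming the same kubectl context the test uses):

    kubectl --context ingress-addon-legacy-823610 -n kube-system \
      get endpoints k8s.io-minikube-hostpath -o yaml | grep holderIdentity

The holderIdentity value should match the ingress-addon-legacy-823610_9bd5e1e7-... identity logged when the provisioner became leader.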
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-823610 -n ingress-addon-legacy-823610
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-823610 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (172.79s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (3.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553062 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553062 -- exec busybox-5bc68d56bd-tvwc7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553062 -- exec busybox-5bc68d56bd-tvwc7 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-553062 -- exec busybox-5bc68d56bd-tvwc7 -- sh -c "ping -c 1 192.168.39.1": exit status 1 (194.531554ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-tvwc7): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553062 -- exec busybox-5bc68d56bd-z67r2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553062 -- exec busybox-5bc68d56bd-z67r2 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-553062 -- exec busybox-5bc68d56bd-z67r2 -- sh -c "ping -c 1 192.168.39.1": exit status 1 (183.690928ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-z67r2): exit status 1
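Both pods resolve host.minikube.internal fine; only the ping itself fails. BusyBox ping needs a raw ICMP socket, and CRI-O's default capability set does not include CAP_NET_RAW, so the socket call is refused even though the container runs as root, hence "permission denied (are you root?)". A minimal workaround sketch, assuming the busybox deployment created in the earlier steps and no pre-existing securityContext on its container:

    # grant CAP_NET_RAW back so BusyBox ping can open a raw ICMP socket
    kubectl --context multinode-553062 patch deployment busybox --type=json -p='[
      {"op": "add",
       "path": "/spec/template/spec/containers/0/securityContext",
       "value": {"capabilities": {"add": ["NET_RAW"]}}}]'

After the rollout finishes, the same `ping -c 1 192.168.39.1` exec should succeed.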
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-553062 -n multinode-553062
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-553062 logs -n 25: (1.358263175s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-460920 ssh -- ls                    | mount-start-2-460920 | jenkins | v1.32.0 | 07 Nov 23 23:25 UTC | 07 Nov 23 23:25 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-460920 ssh --                       | mount-start-2-460920 | jenkins | v1.32.0 | 07 Nov 23 23:25 UTC | 07 Nov 23 23:25 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-460920                           | mount-start-2-460920 | jenkins | v1.32.0 | 07 Nov 23 23:25 UTC | 07 Nov 23 23:25 UTC |
	| start   | -p mount-start-2-460920                           | mount-start-2-460920 | jenkins | v1.32.0 | 07 Nov 23 23:25 UTC | 07 Nov 23 23:26 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-460920 | jenkins | v1.32.0 | 07 Nov 23 23:26 UTC |                     |
	|         | --profile mount-start-2-460920                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-460920 ssh -- ls                    | mount-start-2-460920 | jenkins | v1.32.0 | 07 Nov 23 23:26 UTC | 07 Nov 23 23:26 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-460920 ssh --                       | mount-start-2-460920 | jenkins | v1.32.0 | 07 Nov 23 23:26 UTC | 07 Nov 23 23:26 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-460920                           | mount-start-2-460920 | jenkins | v1.32.0 | 07 Nov 23 23:26 UTC | 07 Nov 23 23:26 UTC |
	| delete  | -p mount-start-1-445045                           | mount-start-1-445045 | jenkins | v1.32.0 | 07 Nov 23 23:26 UTC | 07 Nov 23 23:26 UTC |
	| start   | -p multinode-553062                               | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:26 UTC | 07 Nov 23 23:28 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-553062 -- apply -f                   | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:28 UTC | 07 Nov 23 23:28 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-553062 -- rollout                    | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:28 UTC | 07 Nov 23 23:28 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-553062 -- get pods -o                | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:28 UTC | 07 Nov 23 23:28 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-553062 -- get pods -o                | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:28 UTC | 07 Nov 23 23:28 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-553062 -- exec                       | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:28 UTC | 07 Nov 23 23:28 UTC |
	|         | busybox-5bc68d56bd-tvwc7 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-553062 -- exec                       | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:28 UTC | 07 Nov 23 23:28 UTC |
	|         | busybox-5bc68d56bd-z67r2 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-553062 -- exec                       | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:28 UTC | 07 Nov 23 23:28 UTC |
	|         | busybox-5bc68d56bd-tvwc7 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-553062 -- exec                       | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:28 UTC | 07 Nov 23 23:28 UTC |
	|         | busybox-5bc68d56bd-z67r2 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-553062 -- exec                       | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:28 UTC | 07 Nov 23 23:28 UTC |
	|         | busybox-5bc68d56bd-tvwc7 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-553062 -- exec                       | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:28 UTC | 07 Nov 23 23:28 UTC |
	|         | busybox-5bc68d56bd-z67r2 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-553062 -- get pods -o                | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:28 UTC | 07 Nov 23 23:28 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-553062 -- exec                       | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:28 UTC | 07 Nov 23 23:28 UTC |
	|         | busybox-5bc68d56bd-tvwc7                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-553062 -- exec                       | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:28 UTC |                     |
	|         | busybox-5bc68d56bd-tvwc7 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-553062 -- exec                       | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:28 UTC | 07 Nov 23 23:28 UTC |
	|         | busybox-5bc68d56bd-z67r2                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-553062 -- exec                       | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:28 UTC |                     |
	|         | busybox-5bc68d56bd-z67r2 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/07 23:26:11
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 23:26:11.186004   29973 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:26:11.186122   29973 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:26:11.186131   29973 out.go:309] Setting ErrFile to fd 2...
	I1107 23:26:11.186135   29973 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:26:11.186319   29973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
	I1107 23:26:11.186860   29973 out.go:303] Setting JSON to false
	I1107 23:26:11.187653   29973 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4120,"bootTime":1699395451,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 23:26:11.187708   29973 start.go:138] virtualization: kvm guest
	I1107 23:26:11.190355   29973 out.go:177] * [multinode-553062] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 23:26:11.191940   29973 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 23:26:11.193356   29973 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:26:11.191964   29973 notify.go:220] Checking for updates...
	I1107 23:26:11.196118   29973 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1107 23:26:11.197635   29973 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	I1107 23:26:11.199129   29973 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 23:26:11.200584   29973 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 23:26:11.202032   29973 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:26:11.235431   29973 out.go:177] * Using the kvm2 driver based on user configuration
	I1107 23:26:11.236887   29973 start.go:298] selected driver: kvm2
	I1107 23:26:11.236897   29973 start.go:902] validating driver "kvm2" against <nil>
	I1107 23:26:11.236907   29973 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 23:26:11.237511   29973 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:26:11.237577   29973 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17585-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1107 23:26:11.251433   29973 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1107 23:26:11.251495   29973 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1107 23:26:11.251673   29973 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 23:26:11.251730   29973 cni.go:84] Creating CNI manager for ""
	I1107 23:26:11.251741   29973 cni.go:136] 0 nodes found, recommending kindnet
	I1107 23:26:11.251751   29973 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1107 23:26:11.251760   29973 start_flags.go:323] config:
	{Name:multinode-553062 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-553062 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:26:11.251870   29973 iso.go:125] acquiring lock: {Name:mk02d02b2a7a45dbdd1b46a32fb0724673cb4d8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:26:11.253692   29973 out.go:177] * Starting control plane node multinode-553062 in cluster multinode-553062
	I1107 23:26:11.254948   29973 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:26:11.254985   29973 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1107 23:26:11.254995   29973 cache.go:56] Caching tarball of preloaded images
	I1107 23:26:11.255061   29973 preload.go:174] Found /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1107 23:26:11.255071   29973 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1107 23:26:11.255356   29973 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/config.json ...
	I1107 23:26:11.255375   29973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/config.json: {Name:mk723c38c0b3ce445277b1a86ead211a0067e5bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:26:11.255504   29973 start.go:365] acquiring machines lock for multinode-553062: {Name:mkf032f30be570950285b6e092e75fb29cc3d166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1107 23:26:11.255531   29973 start.go:369] acquired machines lock for "multinode-553062" in 14.431µs
	I1107 23:26:11.255551   29973 start.go:93] Provisioning new machine with config: &{Name:multinode-553062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-553062 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1107 23:26:11.255611   29973 start.go:125] createHost starting for "" (driver="kvm2")
	I1107 23:26:11.257268   29973 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1107 23:26:11.257383   29973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:26:11.257419   29973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:26:11.270506   29973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34629
	I1107 23:26:11.270877   29973 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:26:11.271343   29973 main.go:141] libmachine: Using API Version  1
	I1107 23:26:11.271362   29973 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:26:11.271677   29973 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:26:11.271864   29973 main.go:141] libmachine: (multinode-553062) Calling .GetMachineName
	I1107 23:26:11.271990   29973 main.go:141] libmachine: (multinode-553062) Calling .DriverName
	I1107 23:26:11.272122   29973 start.go:159] libmachine.API.Create for "multinode-553062" (driver="kvm2")
	I1107 23:26:11.272153   29973 client.go:168] LocalClient.Create starting
	I1107 23:26:11.272185   29973 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem
	I1107 23:26:11.272225   29973 main.go:141] libmachine: Decoding PEM data...
	I1107 23:26:11.272250   29973 main.go:141] libmachine: Parsing certificate...
	I1107 23:26:11.272324   29973 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem
	I1107 23:26:11.272350   29973 main.go:141] libmachine: Decoding PEM data...
	I1107 23:26:11.272376   29973 main.go:141] libmachine: Parsing certificate...
	I1107 23:26:11.272401   29973 main.go:141] libmachine: Running pre-create checks...
	I1107 23:26:11.272429   29973 main.go:141] libmachine: (multinode-553062) Calling .PreCreateCheck
	I1107 23:26:11.272717   29973 main.go:141] libmachine: (multinode-553062) Calling .GetConfigRaw
	I1107 23:26:11.273085   29973 main.go:141] libmachine: Creating machine...
	I1107 23:26:11.273098   29973 main.go:141] libmachine: (multinode-553062) Calling .Create
	I1107 23:26:11.273205   29973 main.go:141] libmachine: (multinode-553062) Creating KVM machine...
	I1107 23:26:11.274347   29973 main.go:141] libmachine: (multinode-553062) DBG | found existing default KVM network
	I1107 23:26:11.274961   29973 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:26:11.274818   29995 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a40}
	I1107 23:26:11.279985   29973 main.go:141] libmachine: (multinode-553062) DBG | trying to create private KVM network mk-multinode-553062 192.168.39.0/24...
	I1107 23:26:11.346133   29973 main.go:141] libmachine: (multinode-553062) DBG | private KVM network mk-multinode-553062 192.168.39.0/24 created
	I1107 23:26:11.346172   29973 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:26:11.346088   29995 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17585-9647/.minikube
	I1107 23:26:11.346193   29973 main.go:141] libmachine: (multinode-553062) Setting up store path in /home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062 ...
	I1107 23:26:11.346228   29973 main.go:141] libmachine: (multinode-553062) Building disk image from file:///home/jenkins/minikube-integration/17585-9647/.minikube/cache/iso/amd64/minikube-v1.32.1-amd64.iso
	I1107 23:26:11.346247   29973 main.go:141] libmachine: (multinode-553062) Downloading /home/jenkins/minikube-integration/17585-9647/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17585-9647/.minikube/cache/iso/amd64/minikube-v1.32.1-amd64.iso...
	I1107 23:26:11.550568   29973 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:26:11.550461   29995 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062/id_rsa...
	I1107 23:26:12.153861   29973 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:26:12.153720   29995 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062/multinode-553062.rawdisk...
	I1107 23:26:12.153898   29973 main.go:141] libmachine: (multinode-553062) DBG | Writing magic tar header
	I1107 23:26:12.153923   29973 main.go:141] libmachine: (multinode-553062) DBG | Writing SSH key tar header
	I1107 23:26:12.153942   29973 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:26:12.153827   29995 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062 ...
	I1107 23:26:12.153957   29973 main.go:141] libmachine: (multinode-553062) Setting executable bit set on /home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062 (perms=drwx------)
	I1107 23:26:12.153970   29973 main.go:141] libmachine: (multinode-553062) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062
	I1107 23:26:12.153983   29973 main.go:141] libmachine: (multinode-553062) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17585-9647/.minikube/machines
	I1107 23:26:12.154000   29973 main.go:141] libmachine: (multinode-553062) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17585-9647/.minikube
	I1107 23:26:12.154019   29973 main.go:141] libmachine: (multinode-553062) Setting executable bit set on /home/jenkins/minikube-integration/17585-9647/.minikube/machines (perms=drwxr-xr-x)
	I1107 23:26:12.154037   29973 main.go:141] libmachine: (multinode-553062) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17585-9647
	I1107 23:26:12.154047   29973 main.go:141] libmachine: (multinode-553062) Setting executable bit set on /home/jenkins/minikube-integration/17585-9647/.minikube (perms=drwxr-xr-x)
	I1107 23:26:12.154059   29973 main.go:141] libmachine: (multinode-553062) Setting executable bit set on /home/jenkins/minikube-integration/17585-9647 (perms=drwxrwxr-x)
	I1107 23:26:12.154068   29973 main.go:141] libmachine: (multinode-553062) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1107 23:26:12.154079   29973 main.go:141] libmachine: (multinode-553062) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1107 23:26:12.154085   29973 main.go:141] libmachine: (multinode-553062) Creating domain...
	I1107 23:26:12.154094   29973 main.go:141] libmachine: (multinode-553062) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1107 23:26:12.154102   29973 main.go:141] libmachine: (multinode-553062) DBG | Checking permissions on dir: /home/jenkins
	I1107 23:26:12.154110   29973 main.go:141] libmachine: (multinode-553062) DBG | Checking permissions on dir: /home
	I1107 23:26:12.154119   29973 main.go:141] libmachine: (multinode-553062) DBG | Skipping /home - not owner
	I1107 23:26:12.155096   29973 main.go:141] libmachine: (multinode-553062) define libvirt domain using xml: 
	I1107 23:26:12.155118   29973 main.go:141] libmachine: (multinode-553062) <domain type='kvm'>
	I1107 23:26:12.155130   29973 main.go:141] libmachine: (multinode-553062)   <name>multinode-553062</name>
	I1107 23:26:12.155146   29973 main.go:141] libmachine: (multinode-553062)   <memory unit='MiB'>2200</memory>
	I1107 23:26:12.155161   29973 main.go:141] libmachine: (multinode-553062)   <vcpu>2</vcpu>
	I1107 23:26:12.155173   29973 main.go:141] libmachine: (multinode-553062)   <features>
	I1107 23:26:12.155183   29973 main.go:141] libmachine: (multinode-553062)     <acpi/>
	I1107 23:26:12.155191   29973 main.go:141] libmachine: (multinode-553062)     <apic/>
	I1107 23:26:12.155207   29973 main.go:141] libmachine: (multinode-553062)     <pae/>
	I1107 23:26:12.155227   29973 main.go:141] libmachine: (multinode-553062)     
	I1107 23:26:12.155241   29973 main.go:141] libmachine: (multinode-553062)   </features>
	I1107 23:26:12.155254   29973 main.go:141] libmachine: (multinode-553062)   <cpu mode='host-passthrough'>
	I1107 23:26:12.155266   29973 main.go:141] libmachine: (multinode-553062)   
	I1107 23:26:12.155276   29973 main.go:141] libmachine: (multinode-553062)   </cpu>
	I1107 23:26:12.155285   29973 main.go:141] libmachine: (multinode-553062)   <os>
	I1107 23:26:12.155302   29973 main.go:141] libmachine: (multinode-553062)     <type>hvm</type>
	I1107 23:26:12.155315   29973 main.go:141] libmachine: (multinode-553062)     <boot dev='cdrom'/>
	I1107 23:26:12.155345   29973 main.go:141] libmachine: (multinode-553062)     <boot dev='hd'/>
	I1107 23:26:12.155368   29973 main.go:141] libmachine: (multinode-553062)     <bootmenu enable='no'/>
	I1107 23:26:12.155390   29973 main.go:141] libmachine: (multinode-553062)   </os>
	I1107 23:26:12.155417   29973 main.go:141] libmachine: (multinode-553062)   <devices>
	I1107 23:26:12.155433   29973 main.go:141] libmachine: (multinode-553062)     <disk type='file' device='cdrom'>
	I1107 23:26:12.155449   29973 main.go:141] libmachine: (multinode-553062)       <source file='/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062/boot2docker.iso'/>
	I1107 23:26:12.155463   29973 main.go:141] libmachine: (multinode-553062)       <target dev='hdc' bus='scsi'/>
	I1107 23:26:12.155472   29973 main.go:141] libmachine: (multinode-553062)       <readonly/>
	I1107 23:26:12.155482   29973 main.go:141] libmachine: (multinode-553062)     </disk>
	I1107 23:26:12.155498   29973 main.go:141] libmachine: (multinode-553062)     <disk type='file' device='disk'>
	I1107 23:26:12.155514   29973 main.go:141] libmachine: (multinode-553062)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1107 23:26:12.155532   29973 main.go:141] libmachine: (multinode-553062)       <source file='/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062/multinode-553062.rawdisk'/>
	I1107 23:26:12.155546   29973 main.go:141] libmachine: (multinode-553062)       <target dev='hda' bus='virtio'/>
	I1107 23:26:12.155557   29973 main.go:141] libmachine: (multinode-553062)     </disk>
	I1107 23:26:12.155579   29973 main.go:141] libmachine: (multinode-553062)     <interface type='network'>
	I1107 23:26:12.155594   29973 main.go:141] libmachine: (multinode-553062)       <source network='mk-multinode-553062'/>
	I1107 23:26:12.155607   29973 main.go:141] libmachine: (multinode-553062)       <model type='virtio'/>
	I1107 23:26:12.155636   29973 main.go:141] libmachine: (multinode-553062)     </interface>
	I1107 23:26:12.155651   29973 main.go:141] libmachine: (multinode-553062)     <interface type='network'>
	I1107 23:26:12.155665   29973 main.go:141] libmachine: (multinode-553062)       <source network='default'/>
	I1107 23:26:12.155677   29973 main.go:141] libmachine: (multinode-553062)       <model type='virtio'/>
	I1107 23:26:12.155691   29973 main.go:141] libmachine: (multinode-553062)     </interface>
	I1107 23:26:12.155704   29973 main.go:141] libmachine: (multinode-553062)     <serial type='pty'>
	I1107 23:26:12.155720   29973 main.go:141] libmachine: (multinode-553062)       <target port='0'/>
	I1107 23:26:12.155731   29973 main.go:141] libmachine: (multinode-553062)     </serial>
	I1107 23:26:12.155743   29973 main.go:141] libmachine: (multinode-553062)     <console type='pty'>
	I1107 23:26:12.155760   29973 main.go:141] libmachine: (multinode-553062)       <target type='serial' port='0'/>
	I1107 23:26:12.155773   29973 main.go:141] libmachine: (multinode-553062)     </console>
	I1107 23:26:12.155785   29973 main.go:141] libmachine: (multinode-553062)     <rng model='virtio'>
	I1107 23:26:12.155800   29973 main.go:141] libmachine: (multinode-553062)       <backend model='random'>/dev/random</backend>
	I1107 23:26:12.155808   29973 main.go:141] libmachine: (multinode-553062)     </rng>
	I1107 23:26:12.155820   29973 main.go:141] libmachine: (multinode-553062)     
	I1107 23:26:12.155835   29973 main.go:141] libmachine: (multinode-553062)     
	I1107 23:26:12.155852   29973 main.go:141] libmachine: (multinode-553062)   </devices>
	I1107 23:26:12.155863   29973 main.go:141] libmachine: (multinode-553062) </domain>
	I1107 23:26:12.155877   29973 main.go:141] libmachine: (multinode-553062) 
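
	The XML above is the complete libvirt definition the kvm2 driver submits. Once the domain is defined it can be inspected independently of minikube, for example (assuming virsh is available on the same host):

	    virsh -c qemu:///system dumpxml multinode-553062 | grep -A 2 '<interface'

	which should show the two virtio NICs (networks mk-multinode-553062 and default) that the DHCP wait below relies on.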
	I1107 23:26:12.160006   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:b1:e8:bb in network default
	I1107 23:26:12.160608   29973 main.go:141] libmachine: (multinode-553062) Ensuring networks are active...
	I1107 23:26:12.160632   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:12.161298   29973 main.go:141] libmachine: (multinode-553062) Ensuring network default is active
	I1107 23:26:12.161567   29973 main.go:141] libmachine: (multinode-553062) Ensuring network mk-multinode-553062 is active
	I1107 23:26:12.162003   29973 main.go:141] libmachine: (multinode-553062) Getting domain xml...
	I1107 23:26:12.162670   29973 main.go:141] libmachine: (multinode-553062) Creating domain...
	I1107 23:26:13.353577   29973 main.go:141] libmachine: (multinode-553062) Waiting to get IP...
	I1107 23:26:13.354266   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:13.354609   29973 main.go:141] libmachine: (multinode-553062) DBG | unable to find current IP address of domain multinode-553062 in network mk-multinode-553062
	I1107 23:26:13.354642   29973 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:26:13.354604   29995 retry.go:31] will retry after 250.913567ms: waiting for machine to come up
	I1107 23:26:13.607053   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:13.607465   29973 main.go:141] libmachine: (multinode-553062) DBG | unable to find current IP address of domain multinode-553062 in network mk-multinode-553062
	I1107 23:26:13.607500   29973 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:26:13.607427   29995 retry.go:31] will retry after 332.514758ms: waiting for machine to come up
	I1107 23:26:13.941807   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:13.942254   29973 main.go:141] libmachine: (multinode-553062) DBG | unable to find current IP address of domain multinode-553062 in network mk-multinode-553062
	I1107 23:26:13.942280   29973 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:26:13.942201   29995 retry.go:31] will retry after 458.608971ms: waiting for machine to come up
	I1107 23:26:14.402647   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:14.403056   29973 main.go:141] libmachine: (multinode-553062) DBG | unable to find current IP address of domain multinode-553062 in network mk-multinode-553062
	I1107 23:26:14.403082   29973 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:26:14.403016   29995 retry.go:31] will retry after 534.292488ms: waiting for machine to come up
	I1107 23:26:14.938631   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:14.939014   29973 main.go:141] libmachine: (multinode-553062) DBG | unable to find current IP address of domain multinode-553062 in network mk-multinode-553062
	I1107 23:26:14.939039   29973 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:26:14.938988   29995 retry.go:31] will retry after 667.874557ms: waiting for machine to come up
	I1107 23:26:15.608728   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:15.609136   29973 main.go:141] libmachine: (multinode-553062) DBG | unable to find current IP address of domain multinode-553062 in network mk-multinode-553062
	I1107 23:26:15.609172   29973 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:26:15.609087   29995 retry.go:31] will retry after 768.822507ms: waiting for machine to come up
	I1107 23:26:16.379677   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:16.380028   29973 main.go:141] libmachine: (multinode-553062) DBG | unable to find current IP address of domain multinode-553062 in network mk-multinode-553062
	I1107 23:26:16.380057   29973 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:26:16.380003   29995 retry.go:31] will retry after 829.309377ms: waiting for machine to come up
	I1107 23:26:17.210932   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:17.211338   29973 main.go:141] libmachine: (multinode-553062) DBG | unable to find current IP address of domain multinode-553062 in network mk-multinode-553062
	I1107 23:26:17.211368   29973 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:26:17.211299   29995 retry.go:31] will retry after 1.238425845s: waiting for machine to come up
	I1107 23:26:18.451629   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:18.452058   29973 main.go:141] libmachine: (multinode-553062) DBG | unable to find current IP address of domain multinode-553062 in network mk-multinode-553062
	I1107 23:26:18.452083   29973 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:26:18.452013   29995 retry.go:31] will retry after 1.357307488s: waiting for machine to come up
	I1107 23:26:19.811370   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:19.811715   29973 main.go:141] libmachine: (multinode-553062) DBG | unable to find current IP address of domain multinode-553062 in network mk-multinode-553062
	I1107 23:26:19.811741   29973 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:26:19.811681   29995 retry.go:31] will retry after 1.788510357s: waiting for machine to come up
	I1107 23:26:21.601220   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:21.601655   29973 main.go:141] libmachine: (multinode-553062) DBG | unable to find current IP address of domain multinode-553062 in network mk-multinode-553062
	I1107 23:26:21.601677   29973 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:26:21.601611   29995 retry.go:31] will retry after 2.472859703s: waiting for machine to come up
	I1107 23:26:24.077086   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:24.077546   29973 main.go:141] libmachine: (multinode-553062) DBG | unable to find current IP address of domain multinode-553062 in network mk-multinode-553062
	I1107 23:26:24.077577   29973 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:26:24.077501   29995 retry.go:31] will retry after 3.570318166s: waiting for machine to come up
	I1107 23:26:27.649679   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:27.650109   29973 main.go:141] libmachine: (multinode-553062) DBG | unable to find current IP address of domain multinode-553062 in network mk-multinode-553062
	I1107 23:26:27.650137   29973 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:26:27.650071   29995 retry.go:31] will retry after 3.885741401s: waiting for machine to come up
	I1107 23:26:31.537682   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:31.538044   29973 main.go:141] libmachine: (multinode-553062) DBG | unable to find current IP address of domain multinode-553062 in network mk-multinode-553062
	I1107 23:26:31.538070   29973 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:26:31.537998   29995 retry.go:31] will retry after 3.789840616s: waiting for machine to come up
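
	Fourteen polls with growing backoff before the lease appears is normal for a cold boot. An equivalent by-hand loop, assuming virsh access to the same libvirt instance, is just:

	    # poll the libvirt DHCP leases until the new domain gets an address
	    for i in $(seq 1 30); do
	      ip=$(virsh -c qemu:///system domifaddr multinode-553062 \
	             | awk '/ipv4/ {sub(/\/.*/, "", $4); print $4; exit}')
	      [ -n "$ip" ] && break
	      sleep 2   # the driver backs off progressively; a fixed interval is fine by hand
	    done
	    echo "machine IP: ${ip:-not found}"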
	I1107 23:26:35.331866   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:35.332273   29973 main.go:141] libmachine: (multinode-553062) Found IP for machine: 192.168.39.246
	I1107 23:26:35.332295   29973 main.go:141] libmachine: (multinode-553062) Reserving static IP address...
	I1107 23:26:35.332311   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has current primary IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:35.332701   29973 main.go:141] libmachine: (multinode-553062) DBG | unable to find host DHCP lease matching {name: "multinode-553062", mac: "52:54:00:a6:51:99", ip: "192.168.39.246"} in network mk-multinode-553062
	I1107 23:26:35.402700   29973 main.go:141] libmachine: (multinode-553062) DBG | Getting to WaitForSSH function...
	I1107 23:26:35.402747   29973 main.go:141] libmachine: (multinode-553062) Reserved static IP address: 192.168.39.246
	I1107 23:26:35.402762   29973 main.go:141] libmachine: (multinode-553062) Waiting for SSH to be available...
	I1107 23:26:35.405227   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:35.405650   29973 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:26:27 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a6:51:99}
	I1107 23:26:35.405684   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:35.405790   29973 main.go:141] libmachine: (multinode-553062) DBG | Using SSH client type: external
	I1107 23:26:35.405822   29973 main.go:141] libmachine: (multinode-553062) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062/id_rsa (-rw-------)
	I1107 23:26:35.405871   29973 main.go:141] libmachine: (multinode-553062) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1107 23:26:35.405894   29973 main.go:141] libmachine: (multinode-553062) DBG | About to run SSH command:
	I1107 23:26:35.405919   29973 main.go:141] libmachine: (multinode-553062) DBG | exit 0
	I1107 23:26:35.504698   29973 main.go:141] libmachine: (multinode-553062) DBG | SSH cmd err, output: <nil>: 
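
The probe above shells out to the system ssh client with strict non-interactive options and runs "exit 0" until the guest answers. A Go sketch of the same readiness check, assuming a hypothetical waitForSSH helper rather than libmachine's actual API:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH retries "ssh ... exit 0" until the guest answers or the
// attempts run out, mirroring the option set shown in the log above.
func waitForSSH(addr, keyPath string, attempts int) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + addr,
		"exit", "0",
	}
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command("ssh", args...).Run(); err == nil {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh not ready after %d attempts: %w", attempts, err)
}

func main() {
	if err := waitForSSH("192.168.39.246", "/path/to/id_rsa", 5); err != nil {
		fmt.Println(err)
	}
}
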
	I1107 23:26:35.504942   29973 main.go:141] libmachine: (multinode-553062) KVM machine creation complete!
	I1107 23:26:35.505277   29973 main.go:141] libmachine: (multinode-553062) Calling .GetConfigRaw
	I1107 23:26:35.505795   29973 main.go:141] libmachine: (multinode-553062) Calling .DriverName
	I1107 23:26:35.505981   29973 main.go:141] libmachine: (multinode-553062) Calling .DriverName
	I1107 23:26:35.506153   29973 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1107 23:26:35.506170   29973 main.go:141] libmachine: (multinode-553062) Calling .GetState
	I1107 23:26:35.507533   29973 main.go:141] libmachine: Detecting operating system of created instance...
	I1107 23:26:35.507559   29973 main.go:141] libmachine: Waiting for SSH to be available...
	I1107 23:26:35.507567   29973 main.go:141] libmachine: Getting to WaitForSSH function...
	I1107 23:26:35.507576   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHHostname
	I1107 23:26:35.510072   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:35.510399   29973 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:26:27 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:26:35.510429   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:35.510537   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHPort
	I1107 23:26:35.510737   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:26:35.510880   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:26:35.511010   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHUsername
	I1107 23:26:35.511153   29973 main.go:141] libmachine: Using SSH client type: native
	I1107 23:26:35.511689   29973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1107 23:26:35.511708   29973 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1107 23:26:35.639921   29973 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1107 23:26:35.639945   29973 main.go:141] libmachine: Detecting the provisioner...
	I1107 23:26:35.639955   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHHostname
	I1107 23:26:35.642613   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:35.642926   29973 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:26:27 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:26:35.642949   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:35.643108   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHPort
	I1107 23:26:35.643301   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:26:35.643464   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:26:35.643597   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHUsername
	I1107 23:26:35.643778   29973 main.go:141] libmachine: Using SSH client type: native
	I1107 23:26:35.644153   29973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1107 23:26:35.644166   29973 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1107 23:26:35.777583   29973 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb75713b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1107 23:26:35.777675   29973 main.go:141] libmachine: found compatible host: buildroot
	I1107 23:26:35.777706   29973 main.go:141] libmachine: Provisioning with buildroot...
	I1107 23:26:35.777721   29973 main.go:141] libmachine: (multinode-553062) Calling .GetMachineName
	I1107 23:26:35.777966   29973 buildroot.go:166] provisioning hostname "multinode-553062"
	I1107 23:26:35.777992   29973 main.go:141] libmachine: (multinode-553062) Calling .GetMachineName
	I1107 23:26:35.778187   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHHostname
	I1107 23:26:35.780898   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:35.781248   29973 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:26:27 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:26:35.781275   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:35.781399   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHPort
	I1107 23:26:35.781572   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:26:35.781724   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:26:35.781858   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHUsername
	I1107 23:26:35.782038   29973 main.go:141] libmachine: Using SSH client type: native
	I1107 23:26:35.782372   29973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1107 23:26:35.782389   29973 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-553062 && echo "multinode-553062" | sudo tee /etc/hostname
	I1107 23:26:35.926305   29973 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-553062
	
	I1107 23:26:35.926335   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHHostname
	I1107 23:26:35.929295   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:35.929754   29973 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:26:27 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:26:35.929791   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:35.930069   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHPort
	I1107 23:26:35.930271   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:26:35.930455   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:26:35.930637   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHUsername
	I1107 23:26:35.930815   29973 main.go:141] libmachine: Using SSH client type: native
	I1107 23:26:35.931125   29973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1107 23:26:35.931144   29973 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-553062' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-553062/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-553062' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 23:26:36.069416   29973 main.go:141] libmachine: SSH cmd err, output: <nil>: 
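
The shell fragment above either rewrites an existing 127.0.1.1 line or appends one, so the new hostname always resolves locally. The same reconciliation as a Go sketch (ensureHostsEntry is a hypothetical helper and simplifies the grep checks):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry rewrites an existing "127.0.1.1 ..." line or appends
// one, like the shell snippet in the log (simplified: any mention of the
// hostname counts as already configured).
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if strings.Contains(string(data), hostname) {
		return nil
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.Match(data) {
		data = re.ReplaceAll(data, []byte("127.0.1.1 "+hostname))
	} else {
		data = append(data, []byte("\n127.0.1.1 "+hostname+"\n")...)
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	fmt.Println(ensureHostsEntry("/etc/hosts", "multinode-553062"))
}
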
	I1107 23:26:36.069453   29973 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1107 23:26:36.069476   29973 buildroot.go:174] setting up certificates
	I1107 23:26:36.069491   29973 provision.go:83] configureAuth start
	I1107 23:26:36.069503   29973 main.go:141] libmachine: (multinode-553062) Calling .GetMachineName
	I1107 23:26:36.069817   29973 main.go:141] libmachine: (multinode-553062) Calling .GetIP
	I1107 23:26:36.072086   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:36.072498   29973 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:26:27 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:26:36.072526   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:36.072652   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHHostname
	I1107 23:26:36.074771   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:36.075096   29973 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:26:27 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:26:36.075119   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:36.075261   29973 provision.go:138] copyHostCerts
	I1107 23:26:36.075289   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1107 23:26:36.075322   29973 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1107 23:26:36.075337   29973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1107 23:26:36.075400   29973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1107 23:26:36.075484   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1107 23:26:36.075501   29973 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1107 23:26:36.075505   29973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1107 23:26:36.075534   29973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1107 23:26:36.075591   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1107 23:26:36.075607   29973 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1107 23:26:36.075615   29973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1107 23:26:36.075634   29973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1107 23:26:36.075687   29973 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.multinode-553062 san=[192.168.39.246 192.168.39.246 localhost 127.0.0.1 minikube multinode-553062]
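
The server certificate above carries both IP and DNS SANs so one cert serves 192.168.39.246, localhost, and the machine name. A sketch of building such an x509 template in Go, splitting the SAN list into IPAddresses and DNSNames (signing against the CA key is elided; this is not minikube's provision code):

package main

import (
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// serverCertTemplate builds a server-auth certificate template carrying
// the same kind of SAN list the log shows.
func serverCertTemplate(org string, sans []string) *x509.Certificate {
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, san := range sans {
		if ip := net.ParseIP(san); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, san)
		}
	}
	return tmpl
}

func main() {
	t := serverCertTemplate("jenkins.multinode-553062",
		[]string{"192.168.39.246", "localhost", "127.0.0.1", "minikube", "multinode-553062"})
	fmt.Println(t.DNSNames, t.IPAddresses)
}
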
	I1107 23:26:36.354392   29973 provision.go:172] copyRemoteCerts
	I1107 23:26:36.354450   29973 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 23:26:36.354469   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHHostname
	I1107 23:26:36.357337   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:36.357720   29973 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:26:27 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:26:36.357747   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:36.357944   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHPort
	I1107 23:26:36.358127   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:26:36.358276   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHUsername
	I1107 23:26:36.358454   29973 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062/id_rsa Username:docker}
	I1107 23:26:36.453550   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1107 23:26:36.453627   29973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1107 23:26:36.475659   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1107 23:26:36.475743   29973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1107 23:26:36.497277   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1107 23:26:36.497341   29973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1107 23:26:36.519393   29973 provision.go:86] duration metric: configureAuth took 449.89077ms
	I1107 23:26:36.519413   29973 buildroot.go:189] setting minikube options for container-runtime
	I1107 23:26:36.519565   29973 config.go:182] Loaded profile config "multinode-553062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:26:36.519641   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHHostname
	I1107 23:26:36.522219   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:36.522636   29973 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:26:27 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:26:36.522669   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:36.522839   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHPort
	I1107 23:26:36.523030   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:26:36.523149   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:26:36.523297   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHUsername
	I1107 23:26:36.523431   29973 main.go:141] libmachine: Using SSH client type: native
	I1107 23:26:36.523745   29973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1107 23:26:36.523760   29973 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1107 23:26:36.840329   29973 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1107 23:26:36.840359   29973 main.go:141] libmachine: Checking connection to Docker...
	I1107 23:26:36.840372   29973 main.go:141] libmachine: (multinode-553062) Calling .GetURL
	I1107 23:26:36.841782   29973 main.go:141] libmachine: (multinode-553062) DBG | Using libvirt version 6000000
	I1107 23:26:36.844262   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:36.844645   29973 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:26:27 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:26:36.844687   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:36.844777   29973 main.go:141] libmachine: Docker is up and running!
	I1107 23:26:36.844792   29973 main.go:141] libmachine: Reticulating splines...
	I1107 23:26:36.844798   29973 client.go:171] LocalClient.Create took 25.572634891s
	I1107 23:26:36.844829   29973 start.go:167] duration metric: libmachine.API.Create for "multinode-553062" took 25.572695403s
	I1107 23:26:36.844839   29973 start.go:300] post-start starting for "multinode-553062" (driver="kvm2")
	I1107 23:26:36.844852   29973 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 23:26:36.844872   29973 main.go:141] libmachine: (multinode-553062) Calling .DriverName
	I1107 23:26:36.845117   29973 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 23:26:36.845146   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHHostname
	I1107 23:26:36.847248   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:36.847529   29973 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:26:27 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:26:36.847559   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:36.847690   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHPort
	I1107 23:26:36.847877   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:26:36.848045   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHUsername
	I1107 23:26:36.848200   29973 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062/id_rsa Username:docker}
	I1107 23:26:36.943013   29973 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 23:26:36.946933   29973 command_runner.go:130] > NAME=Buildroot
	I1107 23:26:36.946949   29973 command_runner.go:130] > VERSION=2021.02.12-1-gb75713b-dirty
	I1107 23:26:36.946954   29973 command_runner.go:130] > ID=buildroot
	I1107 23:26:36.946959   29973 command_runner.go:130] > VERSION_ID=2021.02.12
	I1107 23:26:36.946966   29973 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1107 23:26:36.947183   29973 info.go:137] Remote host: Buildroot 2021.02.12
	I1107 23:26:36.947204   29973 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1107 23:26:36.947254   29973 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1107 23:26:36.947336   29973 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1107 23:26:36.947347   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> /etc/ssl/certs/168482.pem
	I1107 23:26:36.947427   29973 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 23:26:36.956094   29973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
	I1107 23:26:36.977144   29973 start.go:303] post-start completed in 132.293589ms
	I1107 23:26:36.977193   29973 main.go:141] libmachine: (multinode-553062) Calling .GetConfigRaw
	I1107 23:26:36.977738   29973 main.go:141] libmachine: (multinode-553062) Calling .GetIP
	I1107 23:26:36.980114   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:36.980403   29973 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:26:27 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:26:36.980449   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:36.980641   29973 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/config.json ...
	I1107 23:26:36.980800   29973 start.go:128] duration metric: createHost completed in 25.725180731s
	I1107 23:26:36.980835   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHHostname
	I1107 23:26:36.982745   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:36.983054   29973 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:26:27 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:26:36.983081   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:36.983198   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHPort
	I1107 23:26:36.983341   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:26:36.983463   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:26:36.983580   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHUsername
	I1107 23:26:36.983733   29973 main.go:141] libmachine: Using SSH client type: native
	I1107 23:26:36.984061   29973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1107 23:26:36.984073   29973 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1107 23:26:37.113405   29973 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699399597.084606767
	
	I1107 23:26:37.113425   29973 fix.go:206] guest clock: 1699399597.084606767
	I1107 23:26:37.113432   29973 fix.go:219] Guest: 2023-11-07 23:26:37.084606767 +0000 UTC Remote: 2023-11-07 23:26:36.980810695 +0000 UTC m=+25.842339378 (delta=103.796072ms)
	I1107 23:26:37.113448   29973 fix.go:190] guest clock delta is within tolerance: 103.796072ms
	I1107 23:26:37.113452   29973 start.go:83] releasing machines lock for "multinode-553062", held for 25.85791326s
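
The clock check above compares the guest's reported time against the host and accepts the skew when it falls inside tolerance. The arithmetic, reproduced as a small Go sketch with the values from the log:

package main

import (
	"fmt"
	"time"
)

// clockWithinTolerance returns the absolute guest/host skew and whether
// it is acceptable (hypothetical helper; minikube's fix.go does the
// real comparison).
func clockWithinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	guest := time.Unix(0, 1699399597084606767) // 1699399597.084606767 from the log
	host := time.Date(2023, 11, 7, 23, 26, 36, 980810695, time.UTC)
	d, ok := clockWithinTolerance(guest, host, 2*time.Second)
	fmt.Println(d, ok) // ~103.796072ms true
}
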
	I1107 23:26:37.113483   29973 main.go:141] libmachine: (multinode-553062) Calling .DriverName
	I1107 23:26:37.113737   29973 main.go:141] libmachine: (multinode-553062) Calling .GetIP
	I1107 23:26:37.116204   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:37.116556   29973 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:26:27 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:26:37.116587   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:37.116680   29973 main.go:141] libmachine: (multinode-553062) Calling .DriverName
	I1107 23:26:37.117204   29973 main.go:141] libmachine: (multinode-553062) Calling .DriverName
	I1107 23:26:37.117356   29973 main.go:141] libmachine: (multinode-553062) Calling .DriverName
	I1107 23:26:37.117433   29973 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 23:26:37.117473   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHHostname
	I1107 23:26:37.117578   29973 ssh_runner.go:195] Run: cat /version.json
	I1107 23:26:37.117603   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHHostname
	I1107 23:26:37.119742   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:37.120018   29973 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:26:27 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:26:37.120046   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:37.120072   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:37.120190   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHPort
	I1107 23:26:37.120359   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:26:37.120461   29973 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:26:27 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:26:37.120487   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHUsername
	I1107 23:26:37.120492   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:37.120633   29973 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062/id_rsa Username:docker}
	I1107 23:26:37.120659   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHPort
	I1107 23:26:37.120803   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:26:37.120948   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHUsername
	I1107 23:26:37.121077   29973 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062/id_rsa Username:docker}
	I1107 23:26:37.216919   29973 command_runner.go:130] > {"iso_version": "v1.32.1", "kicbase_version": "v0.0.41-1698881667-17516", "minikube_version": "v1.32.0", "commit": "0b29983f4bdc1ad55180ee43e3f34cae6c24dee4"}
	I1107 23:26:37.217060   29973 ssh_runner.go:195] Run: systemctl --version
	I1107 23:26:37.239875   29973 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1107 23:26:37.239936   29973 command_runner.go:130] > systemd 247 (247)
	I1107 23:26:37.239967   29973 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1107 23:26:37.240049   29973 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1107 23:26:37.396087   29973 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1107 23:26:37.401852   29973 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1107 23:26:37.401951   29973 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1107 23:26:37.402032   29973 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:26:37.416260   29973 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1107 23:26:37.416353   29973 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1107 23:26:37.416366   29973 start.go:472] detecting cgroup driver to use...
	I1107 23:26:37.416412   29973 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1107 23:26:37.433209   29973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1107 23:26:37.444508   29973 docker.go:203] disabling cri-docker service (if available) ...
	I1107 23:26:37.444552   29973 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1107 23:26:37.456303   29973 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1107 23:26:37.467856   29973 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1107 23:26:37.569205   29973 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1107 23:26:37.569326   29973 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1107 23:26:37.583744   29973 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1107 23:26:37.689982   29973 docker.go:219] disabling docker service ...
	I1107 23:26:37.690049   29973 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1107 23:26:37.703223   29973 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1107 23:26:37.713882   29973 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1107 23:26:37.714770   29973 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1107 23:26:37.728011   29973 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1107 23:26:37.822638   29973 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1107 23:26:37.918759   29973 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1107 23:26:37.918790   29973 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1107 23:26:37.918859   29973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1107 23:26:37.930857   29973 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 23:26:37.947781   29973 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
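
The step above points crictl at CRI-O's socket by writing /etc/crictl.yaml. The equivalent write as a Go sketch (writeCrictlConfig is hypothetical; the real flow pipes printf into tee over SSH):

package main

import (
	"fmt"
	"os"
)

// writeCrictlConfig points crictl at the CRI-O socket, matching the
// one-line YAML the log shows being written.
func writeCrictlConfig() error {
	const conf = "runtime-endpoint: unix:///var/run/crio/crio.sock\n"
	return os.WriteFile("/etc/crictl.yaml", []byte(conf), 0o644)
}

func main() {
	fmt.Println(writeCrictlConfig())
}
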
	I1107 23:26:37.947824   29973 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1107 23:26:37.947872   29973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:26:37.956642   29973 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1107 23:26:37.956702   29973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:26:37.965344   29973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:26:37.974080   29973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
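
The sed commands above patch individual keys in CRI-O's drop-in config: the pause image, the cgroup manager, and the conmon cgroup. A Go sketch of the same line-oriented rewrite (setTOMLKey is a hypothetical helper; minikube itself shells out to sed as shown):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setTOMLKey rewrites any line assigning the given key in a CRI-O
// drop-in config, the way the sed expressions in the log do.
func setTOMLKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	data = re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	return os.WriteFile(path, data, 0o644)
}

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	fmt.Println(setTOMLKey(path, "pause_image", "registry.k8s.io/pause:3.9"))
	fmt.Println(setTOMLKey(path, "cgroup_manager", "cgroupfs"))
}
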
	I1107 23:26:37.982896   29973 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1107 23:26:37.992098   29973 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1107 23:26:38.000024   29973 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1107 23:26:38.000055   29973 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1107 23:26:38.000088   29973 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1107 23:26:38.011852   29973 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
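
Above, the sysctl probe fails because br_netfilter is not loaded yet, so the flow falls back to modprobe and then enables IPv4 forwarding. That fallback as a Go sketch (ensureBridgeNetfilter is hypothetical):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter checks the /proc entry the log probes, loads
// the br_netfilter module when it is missing, then turns on IPv4
// forwarding, mirroring the fallback sequence above.
func ensureBridgeNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, merr := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); merr != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", merr, out)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	fmt.Println(ensureBridgeNetfilter())
}
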
	I1107 23:26:38.019852   29973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 23:26:38.124682   29973 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1107 23:26:38.287219   29973 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1107 23:26:38.287294   29973 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1107 23:26:38.291680   29973 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1107 23:26:38.291719   29973 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1107 23:26:38.291730   29973 command_runner.go:130] > Device: 16h/22d	Inode: 715         Links: 1
	I1107 23:26:38.291742   29973 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1107 23:26:38.291751   29973 command_runner.go:130] > Access: 2023-11-07 23:26:38.246232207 +0000
	I1107 23:26:38.291758   29973 command_runner.go:130] > Modify: 2023-11-07 23:26:38.246232207 +0000
	I1107 23:26:38.291767   29973 command_runner.go:130] > Change: 2023-11-07 23:26:38.246232207 +0000
	I1107 23:26:38.291773   29973 command_runner.go:130] >  Birth: -
	I1107 23:26:38.291799   29973 start.go:540] Will wait 60s for crictl version
	I1107 23:26:38.291957   29973 ssh_runner.go:195] Run: which crictl
	I1107 23:26:38.295586   29973 command_runner.go:130] > /usr/bin/crictl
	I1107 23:26:38.295662   29973 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1107 23:26:38.335179   29973 command_runner.go:130] > Version:  0.1.0
	I1107 23:26:38.335207   29973 command_runner.go:130] > RuntimeName:  cri-o
	I1107 23:26:38.335215   29973 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1107 23:26:38.335223   29973 command_runner.go:130] > RuntimeApiVersion:  v1
	I1107 23:26:38.335300   29973 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1107 23:26:38.335378   29973 ssh_runner.go:195] Run: crio --version
	I1107 23:26:38.380624   29973 command_runner.go:130] > crio version 1.24.1
	I1107 23:26:38.380651   29973 command_runner.go:130] > Version:          1.24.1
	I1107 23:26:38.380663   29973 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1107 23:26:38.380668   29973 command_runner.go:130] > GitTreeState:     dirty
	I1107 23:26:38.380674   29973 command_runner.go:130] > BuildDate:        2023-11-07T07:32:32Z
	I1107 23:26:38.380693   29973 command_runner.go:130] > GoVersion:        go1.19.9
	I1107 23:26:38.380698   29973 command_runner.go:130] > Compiler:         gc
	I1107 23:26:38.380703   29973 command_runner.go:130] > Platform:         linux/amd64
	I1107 23:26:38.380711   29973 command_runner.go:130] > Linkmode:         dynamic
	I1107 23:26:38.380725   29973 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1107 23:26:38.380744   29973 command_runner.go:130] > SeccompEnabled:   true
	I1107 23:26:38.380751   29973 command_runner.go:130] > AppArmorEnabled:  false
	I1107 23:26:38.381939   29973 ssh_runner.go:195] Run: crio --version
	I1107 23:26:38.423179   29973 command_runner.go:130] > crio version 1.24.1
	I1107 23:26:38.423200   29973 command_runner.go:130] > Version:          1.24.1
	I1107 23:26:38.423207   29973 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1107 23:26:38.423212   29973 command_runner.go:130] > GitTreeState:     dirty
	I1107 23:26:38.423221   29973 command_runner.go:130] > BuildDate:        2023-11-07T07:32:32Z
	I1107 23:26:38.423226   29973 command_runner.go:130] > GoVersion:        go1.19.9
	I1107 23:26:38.423230   29973 command_runner.go:130] > Compiler:         gc
	I1107 23:26:38.423234   29973 command_runner.go:130] > Platform:         linux/amd64
	I1107 23:26:38.423240   29973 command_runner.go:130] > Linkmode:         dynamic
	I1107 23:26:38.423248   29973 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1107 23:26:38.423262   29973 command_runner.go:130] > SeccompEnabled:   true
	I1107 23:26:38.423272   29973 command_runner.go:130] > AppArmorEnabled:  false
	I1107 23:26:38.425351   29973 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1107 23:26:38.426854   29973 main.go:141] libmachine: (multinode-553062) Calling .GetIP
	I1107 23:26:38.429635   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:38.430153   29973 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:26:27 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:26:38.430183   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:26:38.430404   29973 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1107 23:26:38.434478   29973 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 23:26:38.447201   29973 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:26:38.447261   29973 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 23:26:38.479334   29973 command_runner.go:130] > {
	I1107 23:26:38.479357   29973 command_runner.go:130] >   "images": [
	I1107 23:26:38.479361   29973 command_runner.go:130] >   ]
	I1107 23:26:38.479364   29973 command_runner.go:130] > }
	I1107 23:26:38.479460   29973 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1107 23:26:38.479509   29973 ssh_runner.go:195] Run: which lz4
	I1107 23:26:38.483055   29973 command_runner.go:130] > /usr/bin/lz4
	I1107 23:26:38.483206   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1107 23:26:38.483288   29973 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1107 23:26:38.487307   29973 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1107 23:26:38.487351   29973 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1107 23:26:38.487381   29973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1107 23:26:40.213540   29973 crio.go:444] Took 1.730274 seconds to copy over tarball
	I1107 23:26:40.213596   29973 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1107 23:26:43.100672   29973 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.887048816s)
	I1107 23:26:43.100722   29973 crio.go:451] Took 2.887141 seconds to extract the tarball
	I1107 23:26:43.100739   29973 ssh_runner.go:146] rm: /preloaded.tar.lz4
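
The preload path above copies a ~457 MB lz4 tarball into the guest, unpacks it into /var, and deletes it, which is why the images appear preloaded on the next crictl query. A Go sketch of the extract-and-clean step (extractPreload is hypothetical; the real flow also scp's the tarball over SSH first, and removal of a root-owned file may itself need sudo):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// extractPreload mirrors the extraction step in the log: decompress the
// preload tarball with lz4 into /var, report the duration, delete it.
func extractPreload(tarball string) error {
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("tar: %v: %s", err, out)
	}
	fmt.Printf("took %s to extract the tarball\n", time.Since(start))
	return os.Remove(tarball)
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}
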
	I1107 23:26:43.141047   29973 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 23:26:43.210888   29973 command_runner.go:130] > {
	I1107 23:26:43.210907   29973 command_runner.go:130] >   "images": [
	I1107 23:26:43.210911   29973 command_runner.go:130] >     {
	I1107 23:26:43.210919   29973 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1107 23:26:43.210923   29973 command_runner.go:130] >       "repoTags": [
	I1107 23:26:43.210934   29973 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1107 23:26:43.210938   29973 command_runner.go:130] >       ],
	I1107 23:26:43.210943   29973 command_runner.go:130] >       "repoDigests": [
	I1107 23:26:43.210951   29973 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1107 23:26:43.210958   29973 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1107 23:26:43.210962   29973 command_runner.go:130] >       ],
	I1107 23:26:43.210966   29973 command_runner.go:130] >       "size": "65258016",
	I1107 23:26:43.210971   29973 command_runner.go:130] >       "uid": null,
	I1107 23:26:43.210975   29973 command_runner.go:130] >       "username": "",
	I1107 23:26:43.210982   29973 command_runner.go:130] >       "spec": null,
	I1107 23:26:43.210990   29973 command_runner.go:130] >       "pinned": false
	I1107 23:26:43.210993   29973 command_runner.go:130] >     },
	I1107 23:26:43.211005   29973 command_runner.go:130] >     {
	I1107 23:26:43.211014   29973 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1107 23:26:43.211018   29973 command_runner.go:130] >       "repoTags": [
	I1107 23:26:43.211023   29973 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1107 23:26:43.211030   29973 command_runner.go:130] >       ],
	I1107 23:26:43.211034   29973 command_runner.go:130] >       "repoDigests": [
	I1107 23:26:43.211045   29973 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1107 23:26:43.211055   29973 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1107 23:26:43.211061   29973 command_runner.go:130] >       ],
	I1107 23:26:43.211069   29973 command_runner.go:130] >       "size": "31470524",
	I1107 23:26:43.211076   29973 command_runner.go:130] >       "uid": null,
	I1107 23:26:43.211080   29973 command_runner.go:130] >       "username": "",
	I1107 23:26:43.211090   29973 command_runner.go:130] >       "spec": null,
	I1107 23:26:43.211100   29973 command_runner.go:130] >       "pinned": false
	I1107 23:26:43.211105   29973 command_runner.go:130] >     },
	I1107 23:26:43.211114   29973 command_runner.go:130] >     {
	I1107 23:26:43.211124   29973 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1107 23:26:43.211134   29973 command_runner.go:130] >       "repoTags": [
	I1107 23:26:43.211147   29973 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1107 23:26:43.211156   29973 command_runner.go:130] >       ],
	I1107 23:26:43.211171   29973 command_runner.go:130] >       "repoDigests": [
	I1107 23:26:43.211182   29973 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1107 23:26:43.211193   29973 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1107 23:26:43.211202   29973 command_runner.go:130] >       ],
	I1107 23:26:43.211214   29973 command_runner.go:130] >       "size": "53621675",
	I1107 23:26:43.211224   29973 command_runner.go:130] >       "uid": null,
	I1107 23:26:43.211233   29973 command_runner.go:130] >       "username": "",
	I1107 23:26:43.211243   29973 command_runner.go:130] >       "spec": null,
	I1107 23:26:43.211252   29973 command_runner.go:130] >       "pinned": false
	I1107 23:26:43.211260   29973 command_runner.go:130] >     },
	I1107 23:26:43.211266   29973 command_runner.go:130] >     {
	I1107 23:26:43.211279   29973 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1107 23:26:43.211289   29973 command_runner.go:130] >       "repoTags": [
	I1107 23:26:43.211297   29973 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1107 23:26:43.211306   29973 command_runner.go:130] >       ],
	I1107 23:26:43.211313   29973 command_runner.go:130] >       "repoDigests": [
	I1107 23:26:43.211327   29973 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1107 23:26:43.211341   29973 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1107 23:26:43.211356   29973 command_runner.go:130] >       ],
	I1107 23:26:43.211363   29973 command_runner.go:130] >       "size": "295456551",
	I1107 23:26:43.211367   29973 command_runner.go:130] >       "uid": {
	I1107 23:26:43.211374   29973 command_runner.go:130] >         "value": "0"
	I1107 23:26:43.211380   29973 command_runner.go:130] >       },
	I1107 23:26:43.211386   29973 command_runner.go:130] >       "username": "",
	I1107 23:26:43.211391   29973 command_runner.go:130] >       "spec": null,
	I1107 23:26:43.211396   29973 command_runner.go:130] >       "pinned": false
	I1107 23:26:43.211400   29973 command_runner.go:130] >     },
	I1107 23:26:43.211404   29973 command_runner.go:130] >     {
	I1107 23:26:43.211410   29973 command_runner.go:130] >       "id": "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076",
	I1107 23:26:43.211416   29973 command_runner.go:130] >       "repoTags": [
	I1107 23:26:43.211424   29973 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.3"
	I1107 23:26:43.211430   29973 command_runner.go:130] >       ],
	I1107 23:26:43.211435   29973 command_runner.go:130] >       "repoDigests": [
	I1107 23:26:43.211444   29973 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab",
	I1107 23:26:43.211451   29973 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"
	I1107 23:26:43.211457   29973 command_runner.go:130] >       ],
	I1107 23:26:43.211461   29973 command_runner.go:130] >       "size": "127165392",
	I1107 23:26:43.211465   29973 command_runner.go:130] >       "uid": {
	I1107 23:26:43.211470   29973 command_runner.go:130] >         "value": "0"
	I1107 23:26:43.211476   29973 command_runner.go:130] >       },
	I1107 23:26:43.211482   29973 command_runner.go:130] >       "username": "",
	I1107 23:26:43.211489   29973 command_runner.go:130] >       "spec": null,
	I1107 23:26:43.211493   29973 command_runner.go:130] >       "pinned": false
	I1107 23:26:43.211499   29973 command_runner.go:130] >     },
	I1107 23:26:43.211502   29973 command_runner.go:130] >     {
	I1107 23:26:43.211508   29973 command_runner.go:130] >       "id": "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3",
	I1107 23:26:43.211512   29973 command_runner.go:130] >       "repoTags": [
	I1107 23:26:43.211517   29973 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.3"
	I1107 23:26:43.211521   29973 command_runner.go:130] >       ],
	I1107 23:26:43.211525   29973 command_runner.go:130] >       "repoDigests": [
	I1107 23:26:43.211532   29973 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707",
	I1107 23:26:43.211539   29973 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d"
	I1107 23:26:43.211542   29973 command_runner.go:130] >       ],
	I1107 23:26:43.211547   29973 command_runner.go:130] >       "size": "123188534",
	I1107 23:26:43.211550   29973 command_runner.go:130] >       "uid": {
	I1107 23:26:43.211554   29973 command_runner.go:130] >         "value": "0"
	I1107 23:26:43.211557   29973 command_runner.go:130] >       },
	I1107 23:26:43.211561   29973 command_runner.go:130] >       "username": "",
	I1107 23:26:43.211567   29973 command_runner.go:130] >       "spec": null,
	I1107 23:26:43.211571   29973 command_runner.go:130] >       "pinned": false
	I1107 23:26:43.211574   29973 command_runner.go:130] >     },
	I1107 23:26:43.211578   29973 command_runner.go:130] >     {
	I1107 23:26:43.211584   29973 command_runner.go:130] >       "id": "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf",
	I1107 23:26:43.211592   29973 command_runner.go:130] >       "repoTags": [
	I1107 23:26:43.211597   29973 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.3"
	I1107 23:26:43.211603   29973 command_runner.go:130] >       ],
	I1107 23:26:43.211607   29973 command_runner.go:130] >       "repoDigests": [
	I1107 23:26:43.211614   29973 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8",
	I1107 23:26:43.211623   29973 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"
	I1107 23:26:43.211627   29973 command_runner.go:130] >       ],
	I1107 23:26:43.211631   29973 command_runner.go:130] >       "size": "74691991",
	I1107 23:26:43.211637   29973 command_runner.go:130] >       "uid": null,
	I1107 23:26:43.211641   29973 command_runner.go:130] >       "username": "",
	I1107 23:26:43.211645   29973 command_runner.go:130] >       "spec": null,
	I1107 23:26:43.211649   29973 command_runner.go:130] >       "pinned": false
	I1107 23:26:43.211652   29973 command_runner.go:130] >     },
	I1107 23:26:43.211658   29973 command_runner.go:130] >     {
	I1107 23:26:43.211667   29973 command_runner.go:130] >       "id": "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4",
	I1107 23:26:43.211671   29973 command_runner.go:130] >       "repoTags": [
	I1107 23:26:43.211677   29973 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.3"
	I1107 23:26:43.211683   29973 command_runner.go:130] >       ],
	I1107 23:26:43.211687   29973 command_runner.go:130] >       "repoDigests": [
	I1107 23:26:43.211779   29973 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725",
	I1107 23:26:43.211801   29973 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374"
	I1107 23:26:43.211807   29973 command_runner.go:130] >       ],
	I1107 23:26:43.211814   29973 command_runner.go:130] >       "size": "61498678",
	I1107 23:26:43.211823   29973 command_runner.go:130] >       "uid": {
	I1107 23:26:43.211830   29973 command_runner.go:130] >         "value": "0"
	I1107 23:26:43.211839   29973 command_runner.go:130] >       },
	I1107 23:26:43.211846   29973 command_runner.go:130] >       "username": "",
	I1107 23:26:43.211855   29973 command_runner.go:130] >       "spec": null,
	I1107 23:26:43.211861   29973 command_runner.go:130] >       "pinned": false
	I1107 23:26:43.211870   29973 command_runner.go:130] >     },
	I1107 23:26:43.211877   29973 command_runner.go:130] >     {
	I1107 23:26:43.211895   29973 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1107 23:26:43.211905   29973 command_runner.go:130] >       "repoTags": [
	I1107 23:26:43.211912   29973 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1107 23:26:43.211921   29973 command_runner.go:130] >       ],
	I1107 23:26:43.211926   29973 command_runner.go:130] >       "repoDigests": [
	I1107 23:26:43.211943   29973 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1107 23:26:43.211958   29973 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1107 23:26:43.211967   29973 command_runner.go:130] >       ],
	I1107 23:26:43.211974   29973 command_runner.go:130] >       "size": "750414",
	I1107 23:26:43.211983   29973 command_runner.go:130] >       "uid": {
	I1107 23:26:43.211991   29973 command_runner.go:130] >         "value": "65535"
	I1107 23:26:43.212005   29973 command_runner.go:130] >       },
	I1107 23:26:43.212014   29973 command_runner.go:130] >       "username": "",
	I1107 23:26:43.212024   29973 command_runner.go:130] >       "spec": null,
	I1107 23:26:43.212031   29973 command_runner.go:130] >       "pinned": false
	I1107 23:26:43.212040   29973 command_runner.go:130] >     }
	I1107 23:26:43.212045   29973 command_runner.go:130] >   ]
	I1107 23:26:43.212055   29973 command_runner.go:130] > }
	I1107 23:26:43.212231   29973 crio.go:496] all images are preloaded for cri-o runtime.
	I1107 23:26:43.212250   29973 cache_images.go:84] Images are preloaded, skipping loading
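	(For reference, the JSON dump ending above is CRI-O's image-list response. The same inventory can be pulled by hand with crictl; a minimal sketch, assuming crictl and jq are available on the node:

	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock \
	      images -o json | jq -r '.images[].repoTags[]'
	)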
	I1107 23:26:43.212323   29973 ssh_runner.go:195] Run: crio config
	I1107 23:26:43.265599   29973 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1107 23:26:43.265630   29973 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1107 23:26:43.265641   29973 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1107 23:26:43.265647   29973 command_runner.go:130] > #
	I1107 23:26:43.265658   29973 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1107 23:26:43.265668   29973 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1107 23:26:43.265682   29973 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1107 23:26:43.265702   29973 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1107 23:26:43.265711   29973 command_runner.go:130] > # reload'.
	I1107 23:26:43.265721   29973 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1107 23:26:43.265734   29973 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1107 23:26:43.265750   29973 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1107 23:26:43.265765   29973 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1107 23:26:43.265771   29973 command_runner.go:130] > [crio]
	I1107 23:26:43.265781   29973 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1107 23:26:43.265792   29973 command_runner.go:130] > # container images, in this directory.
	I1107 23:26:43.265821   29973 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1107 23:26:43.265840   29973 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1107 23:26:43.266282   29973 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1107 23:26:43.266300   29973 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1107 23:26:43.266310   29973 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1107 23:26:43.266612   29973 command_runner.go:130] > storage_driver = "overlay"
	I1107 23:26:43.266628   29973 command_runner.go:130] > # List of options to pass to the storage driver. Please refer to
	I1107 23:26:43.266638   29973 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1107 23:26:43.266644   29973 command_runner.go:130] > storage_option = [
	I1107 23:26:43.266872   29973 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1107 23:26:43.266926   29973 command_runner.go:130] > ]
	I1107 23:26:43.266942   29973 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1107 23:26:43.266953   29973 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1107 23:26:43.267691   29973 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1107 23:26:43.267708   29973 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1107 23:26:43.267718   29973 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1107 23:26:43.267725   29973 command_runner.go:130] > # always happen on a node reboot
	I1107 23:26:43.268584   29973 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1107 23:26:43.268600   29973 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1107 23:26:43.268610   29973 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1107 23:26:43.268631   29973 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1107 23:26:43.269339   29973 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1107 23:26:43.269356   29973 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1107 23:26:43.269369   29973 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1107 23:26:43.269905   29973 command_runner.go:130] > # internal_wipe = true
	I1107 23:26:43.269923   29973 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1107 23:26:43.269934   29973 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1107 23:26:43.269943   29973 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1107 23:26:43.270388   29973 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1107 23:26:43.270405   29973 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1107 23:26:43.270411   29973 command_runner.go:130] > [crio.api]
	I1107 23:26:43.270420   29973 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1107 23:26:43.270439   29973 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1107 23:26:43.270454   29973 command_runner.go:130] > # IP address on which the stream server will listen.
	I1107 23:26:43.270462   29973 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1107 23:26:43.270472   29973 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1107 23:26:43.270480   29973 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1107 23:26:43.270530   29973 command_runner.go:130] > # stream_port = "0"
	I1107 23:26:43.270542   29973 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1107 23:26:43.270550   29973 command_runner.go:130] > # stream_enable_tls = false
	I1107 23:26:43.270564   29973 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1107 23:26:43.270573   29973 command_runner.go:130] > # stream_idle_timeout = ""
	I1107 23:26:43.270593   29973 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1107 23:26:43.270611   29973 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1107 23:26:43.270622   29973 command_runner.go:130] > # minutes.
	I1107 23:26:43.270632   29973 command_runner.go:130] > # stream_tls_cert = ""
	I1107 23:26:43.270646   29973 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1107 23:26:43.270660   29973 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1107 23:26:43.270671   29973 command_runner.go:130] > # stream_tls_key = ""
	I1107 23:26:43.270681   29973 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1107 23:26:43.270699   29973 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1107 23:26:43.270711   29973 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1107 23:26:43.270721   29973 command_runner.go:130] > # stream_tls_ca = ""
	I1107 23:26:43.270738   29973 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1107 23:26:43.270776   29973 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1107 23:26:43.270793   29973 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1107 23:26:43.270804   29973 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1107 23:26:43.270836   29973 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1107 23:26:43.270849   29973 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1107 23:26:43.270859   29973 command_runner.go:130] > [crio.runtime]
	I1107 23:26:43.270871   29973 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1107 23:26:43.270882   29973 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1107 23:26:43.270892   29973 command_runner.go:130] > # "nofile=1024:2048"
	I1107 23:26:43.270902   29973 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1107 23:26:43.270910   29973 command_runner.go:130] > # default_ulimits = [
	I1107 23:26:43.270920   29973 command_runner.go:130] > # ]
	I1107 23:26:43.270931   29973 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1107 23:26:43.270941   29973 command_runner.go:130] > # no_pivot = false
	I1107 23:26:43.270955   29973 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1107 23:26:43.270970   29973 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1107 23:26:43.270983   29973 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1107 23:26:43.270997   29973 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1107 23:26:43.271009   29973 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1107 23:26:43.271024   29973 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1107 23:26:43.271035   29973 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1107 23:26:43.271042   29973 command_runner.go:130] > # Cgroup setting for conmon
	I1107 23:26:43.271053   29973 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1107 23:26:43.271063   29973 command_runner.go:130] > conmon_cgroup = "pod"
	I1107 23:26:43.271084   29973 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1107 23:26:43.271095   29973 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1107 23:26:43.271109   29973 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1107 23:26:43.271119   29973 command_runner.go:130] > conmon_env = [
	I1107 23:26:43.271131   29973 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1107 23:26:43.271140   29973 command_runner.go:130] > ]
	I1107 23:26:43.271148   29973 command_runner.go:130] > # Additional environment variables to set for all the
	I1107 23:26:43.271159   29973 command_runner.go:130] > # containers. These are overridden if set in the
	I1107 23:26:43.271184   29973 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1107 23:26:43.271197   29973 command_runner.go:130] > # default_env = [
	I1107 23:26:43.271233   29973 command_runner.go:130] > # ]
	I1107 23:26:43.271246   29973 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1107 23:26:43.271253   29973 command_runner.go:130] > # selinux = false
	I1107 23:26:43.271266   29973 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1107 23:26:43.271279   29973 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1107 23:26:43.271293   29973 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1107 23:26:43.271304   29973 command_runner.go:130] > # seccomp_profile = ""
	I1107 23:26:43.271317   29973 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1107 23:26:43.271331   29973 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1107 23:26:43.271342   29973 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1107 23:26:43.271354   29973 command_runner.go:130] > # which might increase security.
	I1107 23:26:43.271365   29973 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1107 23:26:43.271379   29973 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1107 23:26:43.271394   29973 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1107 23:26:43.271408   29973 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1107 23:26:43.271422   29973 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1107 23:26:43.271438   29973 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:26:43.271449   29973 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1107 23:26:43.271458   29973 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1107 23:26:43.271468   29973 command_runner.go:130] > # the cgroup blockio controller.
	I1107 23:26:43.271475   29973 command_runner.go:130] > # blockio_config_file = ""
	I1107 23:26:43.271489   29973 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1107 23:26:43.271500   29973 command_runner.go:130] > # irqbalance daemon.
	I1107 23:26:43.271513   29973 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1107 23:26:43.271535   29973 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1107 23:26:43.271549   29973 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:26:43.271558   29973 command_runner.go:130] > # rdt_config_file = ""
	I1107 23:26:43.271570   29973 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1107 23:26:43.271581   29973 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1107 23:26:43.271593   29973 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1107 23:26:43.271604   29973 command_runner.go:130] > # separate_pull_cgroup = ""
	I1107 23:26:43.271616   29973 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1107 23:26:43.271629   29973 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1107 23:26:43.271643   29973 command_runner.go:130] > # will be added.
	I1107 23:26:43.271658   29973 command_runner.go:130] > # default_capabilities = [
	I1107 23:26:43.271668   29973 command_runner.go:130] > # 	"CHOWN",
	I1107 23:26:43.271674   29973 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1107 23:26:43.271684   29973 command_runner.go:130] > # 	"FSETID",
	I1107 23:26:43.271690   29973 command_runner.go:130] > # 	"FOWNER",
	I1107 23:26:43.271699   29973 command_runner.go:130] > # 	"SETGID",
	I1107 23:26:43.271705   29973 command_runner.go:130] > # 	"SETUID",
	I1107 23:26:43.271715   29973 command_runner.go:130] > # 	"SETPCAP",
	I1107 23:26:43.271725   29973 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1107 23:26:43.271732   29973 command_runner.go:130] > # 	"KILL",
	I1107 23:26:43.271741   29973 command_runner.go:130] > # ]
	I1107 23:26:43.271752   29973 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1107 23:26:43.271765   29973 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1107 23:26:43.271773   29973 command_runner.go:130] > # default_sysctls = [
	I1107 23:26:43.271780   29973 command_runner.go:130] > # ]
	I1107 23:26:43.271787   29973 command_runner.go:130] > # List of devices on the host that a
	I1107 23:26:43.271806   29973 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1107 23:26:43.271815   29973 command_runner.go:130] > # allowed_devices = [
	I1107 23:26:43.271822   29973 command_runner.go:130] > # 	"/dev/fuse",
	I1107 23:26:43.271828   29973 command_runner.go:130] > # ]
	I1107 23:26:43.271834   29973 command_runner.go:130] > # List of additional devices, specified as
	I1107 23:26:43.271850   29973 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1107 23:26:43.271859   29973 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1107 23:26:43.271901   29973 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1107 23:26:43.271913   29973 command_runner.go:130] > # additional_devices = [
	I1107 23:26:43.271919   29973 command_runner.go:130] > # ]
	I1107 23:26:43.271927   29973 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1107 23:26:43.271934   29973 command_runner.go:130] > # cdi_spec_dirs = [
	I1107 23:26:43.271943   29973 command_runner.go:130] > # 	"/etc/cdi",
	I1107 23:26:43.271949   29973 command_runner.go:130] > # 	"/var/run/cdi",
	I1107 23:26:43.271957   29973 command_runner.go:130] > # ]
	I1107 23:26:43.271969   29973 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1107 23:26:43.271984   29973 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1107 23:26:43.271994   29973 command_runner.go:130] > # Defaults to false.
	I1107 23:26:43.272035   29973 command_runner.go:130] > # device_ownership_from_security_context = false
	I1107 23:26:43.272049   29973 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1107 23:26:43.272065   29973 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1107 23:26:43.272080   29973 command_runner.go:130] > # hooks_dir = [
	I1107 23:26:43.272090   29973 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1107 23:26:43.272097   29973 command_runner.go:130] > # ]
	I1107 23:26:43.272106   29973 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1107 23:26:43.272120   29973 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1107 23:26:43.272132   29973 command_runner.go:130] > # its default mounts from the following two files:
	I1107 23:26:43.272138   29973 command_runner.go:130] > #
	I1107 23:26:43.272147   29973 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1107 23:26:43.272165   29973 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1107 23:26:43.272180   29973 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1107 23:26:43.272189   29973 command_runner.go:130] > #
	I1107 23:26:43.272204   29973 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1107 23:26:43.272226   29973 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1107 23:26:43.272243   29973 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1107 23:26:43.272250   29973 command_runner.go:130] > #      only add mounts it finds in this file.
	I1107 23:26:43.272255   29973 command_runner.go:130] > #
	I1107 23:26:43.272262   29973 command_runner.go:130] > # default_mounts_file = ""
	I1107 23:26:43.272277   29973 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1107 23:26:43.272294   29973 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1107 23:26:43.272302   29973 command_runner.go:130] > pids_limit = 1024
	I1107 23:26:43.272317   29973 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1107 23:26:43.272328   29973 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1107 23:26:43.272342   29973 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1107 23:26:43.272360   29973 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1107 23:26:43.272371   29973 command_runner.go:130] > # log_size_max = -1
	I1107 23:26:43.272385   29973 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1107 23:26:43.272395   29973 command_runner.go:130] > # log_to_journald = false
	I1107 23:26:43.272406   29973 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1107 23:26:43.272429   29973 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1107 23:26:43.272437   29973 command_runner.go:130] > # Path to directory for container attach sockets.
	I1107 23:26:43.272449   29973 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1107 23:26:43.272457   29973 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1107 23:26:43.272464   29973 command_runner.go:130] > # bind_mount_prefix = ""
	I1107 23:26:43.272472   29973 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1107 23:26:43.272481   29973 command_runner.go:130] > # read_only = false
	I1107 23:26:43.272496   29973 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1107 23:26:43.272508   29973 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1107 23:26:43.272518   29973 command_runner.go:130] > # live configuration reload.
	I1107 23:26:43.272527   29973 command_runner.go:130] > # log_level = "info"
	I1107 23:26:43.272538   29973 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1107 23:26:43.272548   29973 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:26:43.272560   29973 command_runner.go:130] > # log_filter = ""
	I1107 23:26:43.272572   29973 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1107 23:26:43.272586   29973 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1107 23:26:43.272596   29973 command_runner.go:130] > # separated by comma.
	I1107 23:26:43.272609   29973 command_runner.go:130] > # uid_mappings = ""
	I1107 23:26:43.272622   29973 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1107 23:26:43.272635   29973 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1107 23:26:43.272645   29973 command_runner.go:130] > # separated by comma.
	I1107 23:26:43.272657   29973 command_runner.go:130] > # gid_mappings = ""
	I1107 23:26:43.272669   29973 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1107 23:26:43.272687   29973 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1107 23:26:43.272698   29973 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1107 23:26:43.272709   29973 command_runner.go:130] > # minimum_mappable_uid = -1
	I1107 23:26:43.272722   29973 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1107 23:26:43.272736   29973 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1107 23:26:43.272750   29973 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1107 23:26:43.272761   29973 command_runner.go:130] > # minimum_mappable_gid = -1
	I1107 23:26:43.272774   29973 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1107 23:26:43.272787   29973 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1107 23:26:43.272800   29973 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1107 23:26:43.272810   29973 command_runner.go:130] > # ctr_stop_timeout = 30
	I1107 23:26:43.272838   29973 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1107 23:26:43.272851   29973 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1107 23:26:43.272860   29973 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1107 23:26:43.272872   29973 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1107 23:26:43.272882   29973 command_runner.go:130] > drop_infra_ctr = false
	I1107 23:26:43.272894   29973 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1107 23:26:43.272908   29973 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1107 23:26:43.272923   29973 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1107 23:26:43.272933   29973 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1107 23:26:43.272957   29973 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1107 23:26:43.273001   29973 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1107 23:26:43.273011   29973 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1107 23:26:43.273023   29973 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1107 23:26:43.273033   29973 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1107 23:26:43.273043   29973 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1107 23:26:43.273056   29973 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1107 23:26:43.273076   29973 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1107 23:26:43.273083   29973 command_runner.go:130] > # default_runtime = "runc"
	I1107 23:26:43.273091   29973 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1107 23:26:43.273105   29973 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1107 23:26:43.273120   29973 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1107 23:26:43.273131   29973 command_runner.go:130] > # creation as a file is not desired either.
	I1107 23:26:43.273144   29973 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1107 23:26:43.273156   29973 command_runner.go:130] > # the hostname is being managed dynamically.
	I1107 23:26:43.273164   29973 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1107 23:26:43.273172   29973 command_runner.go:130] > # ]
	I1107 23:26:43.273181   29973 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1107 23:26:43.273197   29973 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1107 23:26:43.273211   29973 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1107 23:26:43.273224   29973 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1107 23:26:43.273230   29973 command_runner.go:130] > #
	I1107 23:26:43.273241   29973 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1107 23:26:43.273250   29973 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1107 23:26:43.273260   29973 command_runner.go:130] > #  runtime_type = "oci"
	I1107 23:26:43.273272   29973 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1107 23:26:43.273284   29973 command_runner.go:130] > #  privileged_without_host_devices = false
	I1107 23:26:43.273293   29973 command_runner.go:130] > #  allowed_annotations = []
	I1107 23:26:43.273301   29973 command_runner.go:130] > # Where:
	I1107 23:26:43.273311   29973 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1107 23:26:43.273320   29973 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1107 23:26:43.273332   29973 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1107 23:26:43.273342   29973 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1107 23:26:43.273351   29973 command_runner.go:130] > #   in $PATH.
	I1107 23:26:43.273360   29973 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1107 23:26:43.273374   29973 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1107 23:26:43.273390   29973 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1107 23:26:43.273398   29973 command_runner.go:130] > #   state.
	I1107 23:26:43.273408   29973 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1107 23:26:43.273419   29973 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1107 23:26:43.273428   29973 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1107 23:26:43.273439   29973 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1107 23:26:43.273451   29973 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1107 23:26:43.273464   29973 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1107 23:26:43.273475   29973 command_runner.go:130] > #   The currently recognized values are:
	I1107 23:26:43.273486   29973 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1107 23:26:43.273503   29973 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1107 23:26:43.273515   29973 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1107 23:26:43.273535   29973 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1107 23:26:43.273551   29973 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1107 23:26:43.273565   29973 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1107 23:26:43.273574   29973 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1107 23:26:43.273588   29973 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1107 23:26:43.273600   29973 command_runner.go:130] > #   should be moved to the container's cgroup
	I1107 23:26:43.273613   29973 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1107 23:26:43.273623   29973 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1107 23:26:43.273632   29973 command_runner.go:130] > runtime_type = "oci"
	I1107 23:26:43.273640   29973 command_runner.go:130] > runtime_root = "/run/runc"
	I1107 23:26:43.273650   29973 command_runner.go:130] > runtime_config_path = ""
	I1107 23:26:43.273657   29973 command_runner.go:130] > monitor_path = ""
	I1107 23:26:43.273666   29973 command_runner.go:130] > monitor_cgroup = ""
	I1107 23:26:43.273673   29973 command_runner.go:130] > monitor_exec_cgroup = ""
	I1107 23:26:43.273686   29973 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1107 23:26:43.273695   29973 command_runner.go:130] > # running containers
	I1107 23:26:43.273705   29973 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1107 23:26:43.273718   29973 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1107 23:26:43.273781   29973 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1107 23:26:43.273795   29973 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1107 23:26:43.273807   29973 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1107 23:26:43.273818   29973 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1107 23:26:43.273826   29973 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1107 23:26:43.273838   29973 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1107 23:26:43.273852   29973 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1107 23:26:43.273863   29973 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1107 23:26:43.273897   29973 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1107 23:26:43.273908   29973 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1107 23:26:43.273918   29973 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1107 23:26:43.273932   29973 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1107 23:26:43.273947   29973 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1107 23:26:43.273959   29973 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1107 23:26:43.273975   29973 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1107 23:26:43.273991   29973 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1107 23:26:43.274007   29973 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1107 23:26:43.274021   29973 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1107 23:26:43.274029   29973 command_runner.go:130] > # Example:
	I1107 23:26:43.274037   29973 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1107 23:26:43.274049   29973 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1107 23:26:43.274059   29973 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1107 23:26:43.274077   29973 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1107 23:26:43.274087   29973 command_runner.go:130] > # cpuset = 0
	I1107 23:26:43.274097   29973 command_runner.go:130] > # cpushares = "0-1"
	I1107 23:26:43.274102   29973 command_runner.go:130] > # Where:
	I1107 23:26:43.274111   29973 command_runner.go:130] > # The workload name is workload-type.
	I1107 23:26:43.274125   29973 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1107 23:26:43.274135   29973 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1107 23:26:43.274143   29973 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1107 23:26:43.274157   29973 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1107 23:26:43.274168   29973 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1107 23:26:43.274176   29973 command_runner.go:130] > # 
	I1107 23:26:43.274186   29973 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1107 23:26:43.274195   29973 command_runner.go:130] > #
	I1107 23:26:43.274205   29973 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1107 23:26:43.274219   29973 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1107 23:26:43.274232   29973 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1107 23:26:43.274246   29973 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1107 23:26:43.274258   29973 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1107 23:26:43.274267   29973 command_runner.go:130] > [crio.image]
	I1107 23:26:43.274278   29973 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1107 23:26:43.274293   29973 command_runner.go:130] > # default_transport = "docker://"
	I1107 23:26:43.274307   29973 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1107 23:26:43.274321   29973 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1107 23:26:43.274329   29973 command_runner.go:130] > # global_auth_file = ""
	I1107 23:26:43.274338   29973 command_runner.go:130] > # The image used to instantiate infra containers.
	I1107 23:26:43.274348   29973 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:26:43.274358   29973 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1107 23:26:43.274369   29973 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1107 23:26:43.274381   29973 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1107 23:26:43.274391   29973 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:26:43.274403   29973 command_runner.go:130] > # pause_image_auth_file = ""
	I1107 23:26:43.274414   29973 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1107 23:26:43.274424   29973 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1107 23:26:43.274434   29973 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1107 23:26:43.274444   29973 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1107 23:26:43.274450   29973 command_runner.go:130] > # pause_command = "/pause"
	I1107 23:26:43.274459   29973 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1107 23:26:43.274468   29973 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1107 23:26:43.274481   29973 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1107 23:26:43.274492   29973 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1107 23:26:43.274500   29973 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1107 23:26:43.274507   29973 command_runner.go:130] > # signature_policy = ""
	I1107 23:26:43.274517   29973 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1107 23:26:43.274528   29973 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1107 23:26:43.274535   29973 command_runner.go:130] > # changing them here.
	I1107 23:26:43.274542   29973 command_runner.go:130] > # insecure_registries = [
	I1107 23:26:43.274548   29973 command_runner.go:130] > # ]
	I1107 23:26:43.274558   29973 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1107 23:26:43.274564   29973 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1107 23:26:43.274569   29973 command_runner.go:130] > # image_volumes = "mkdir"
	I1107 23:26:43.274574   29973 command_runner.go:130] > # Temporary directory to use for storing big files
	I1107 23:26:43.274581   29973 command_runner.go:130] > # big_files_temporary_dir = ""
	I1107 23:26:43.274590   29973 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1107 23:26:43.274595   29973 command_runner.go:130] > # CNI plugins.
	I1107 23:26:43.274602   29973 command_runner.go:130] > [crio.network]
	I1107 23:26:43.274639   29973 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1107 23:26:43.274656   29973 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1107 23:26:43.274664   29973 command_runner.go:130] > # cni_default_network = ""
	I1107 23:26:43.274676   29973 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1107 23:26:43.274683   29973 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1107 23:26:43.274695   29973 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1107 23:26:43.274706   29973 command_runner.go:130] > # plugin_dirs = [
	I1107 23:26:43.274712   29973 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1107 23:26:43.274718   29973 command_runner.go:130] > # ]
	I1107 23:26:43.274731   29973 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1107 23:26:43.274740   29973 command_runner.go:130] > [crio.metrics]
	I1107 23:26:43.274748   29973 command_runner.go:130] > # Globally enable or disable metrics support.
	I1107 23:26:43.274757   29973 command_runner.go:130] > enable_metrics = true
	I1107 23:26:43.274765   29973 command_runner.go:130] > # Specify enabled metrics collectors.
	I1107 23:26:43.274776   29973 command_runner.go:130] > # Per default all metrics are enabled.
	I1107 23:26:43.274787   29973 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1107 23:26:43.274802   29973 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1107 23:26:43.274815   29973 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1107 23:26:43.274826   29973 command_runner.go:130] > # metrics_collectors = [
	I1107 23:26:43.274840   29973 command_runner.go:130] > # 	"operations",
	I1107 23:26:43.274868   29973 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1107 23:26:43.274879   29973 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1107 23:26:43.274890   29973 command_runner.go:130] > # 	"operations_errors",
	I1107 23:26:43.274897   29973 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1107 23:26:43.274907   29973 command_runner.go:130] > # 	"image_pulls_by_name",
	I1107 23:26:43.274913   29973 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1107 23:26:43.274920   29973 command_runner.go:130] > # 	"image_pulls_failures",
	I1107 23:26:43.274924   29973 command_runner.go:130] > # 	"image_pulls_successes",
	I1107 23:26:43.274930   29973 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1107 23:26:43.274934   29973 command_runner.go:130] > # 	"image_layer_reuse",
	I1107 23:26:43.274940   29973 command_runner.go:130] > # 	"containers_oom_total",
	I1107 23:26:43.274945   29973 command_runner.go:130] > # 	"containers_oom",
	I1107 23:26:43.274951   29973 command_runner.go:130] > # 	"processes_defunct",
	I1107 23:26:43.274955   29973 command_runner.go:130] > # 	"operations_total",
	I1107 23:26:43.274959   29973 command_runner.go:130] > # 	"operations_latency_seconds",
	I1107 23:26:43.274966   29973 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1107 23:26:43.274971   29973 command_runner.go:130] > # 	"operations_errors_total",
	I1107 23:26:43.274981   29973 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1107 23:26:43.274988   29973 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1107 23:26:43.274992   29973 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1107 23:26:43.274999   29973 command_runner.go:130] > # 	"image_pulls_success_total",
	I1107 23:26:43.275003   29973 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1107 23:26:43.275010   29973 command_runner.go:130] > # 	"containers_oom_count_total",
	I1107 23:26:43.275013   29973 command_runner.go:130] > # ]
	I1107 23:26:43.275019   29973 command_runner.go:130] > # The port on which the metrics server will listen.
	I1107 23:26:43.275028   29973 command_runner.go:130] > # metrics_port = 9090
	I1107 23:26:43.275041   29973 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1107 23:26:43.275051   29973 command_runner.go:130] > # metrics_socket = ""
	I1107 23:26:43.275062   29973 command_runner.go:130] > # The certificate for the secure metrics server.
	I1107 23:26:43.275080   29973 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1107 23:26:43.275093   29973 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1107 23:26:43.275104   29973 command_runner.go:130] > # certificate on any modification event.
	I1107 23:26:43.275114   29973 command_runner.go:130] > # metrics_cert = ""
	I1107 23:26:43.275129   29973 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1107 23:26:43.275140   29973 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1107 23:26:43.275155   29973 command_runner.go:130] > # metrics_key = ""
	I1107 23:26:43.275168   29973 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1107 23:26:43.275178   29973 command_runner.go:130] > [crio.tracing]
	I1107 23:26:43.275190   29973 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1107 23:26:43.275198   29973 command_runner.go:130] > # enable_tracing = false
	I1107 23:26:43.275207   29973 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1107 23:26:43.275214   29973 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1107 23:26:43.275219   29973 command_runner.go:130] > # Number of samples to collect per million spans.
	I1107 23:26:43.275227   29973 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1107 23:26:43.275233   29973 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1107 23:26:43.275239   29973 command_runner.go:130] > [crio.stats]
	I1107 23:26:43.275244   29973 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1107 23:26:43.275252   29973 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1107 23:26:43.275259   29973 command_runner.go:130] > # stats_collection_period = 0
	I1107 23:26:43.275295   29973 command_runner.go:130] ! time="2023-11-07 23:26:43.242133942Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1107 23:26:43.275309   29973 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
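	(Relative to stock CRI-O defaults, the dump above pins only a handful of settings: storage_driver, cgroup_manager, conmon, pids_limit, pause_image, and the gRPC message sizes. A quick way to re-check them on the node; a sketch, assuming the multinode-553062 profile from this run:

	    minikube -p multinode-553062 ssh -- sudo crio config 2>/dev/null \
	      | grep -E '^(storage_driver|cgroup_manager|conmon|pids_limit|pause_image|grpc_max)'
	)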
	I1107 23:26:43.275385   29973 cni.go:84] Creating CNI manager for ""
	I1107 23:26:43.275394   29973 cni.go:136] 1 nodes found, recommending kindnet
	I1107 23:26:43.275417   29973 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 23:26:43.275436   29973 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.246 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-553062 NodeName:multinode-553062 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1107 23:26:43.275557   29973 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-553062"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
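	(The generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new a few steps below; on kubeadm v1.26+ it can be sanity-checked offline. A sketch, using the path from the transfer step:

	    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	)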
	
	I1107 23:26:43.275629   29973 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-553062 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-553062 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
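	(The [Service] override above becomes a systemd drop-in, 10-kubeadm.conf, transferred below; the empty ExecStart= line clears any packaged command before the full flag set is substituted. Applying such a drop-in by hand would look like this sketch:

	    sudo systemctl daemon-reload
	    sudo systemctl restart kubelet
	    systemctl cat kubelet    # prints the unit plus the drop-in override
	)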
	I1107 23:26:43.275679   29973 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1107 23:26:43.285408   29973 command_runner.go:130] > kubeadm
	I1107 23:26:43.285425   29973 command_runner.go:130] > kubectl
	I1107 23:26:43.285429   29973 command_runner.go:130] > kubelet
	I1107 23:26:43.285542   29973 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 23:26:43.285619   29973 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 23:26:43.294833   29973 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I1107 23:26:43.311108   29973 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 23:26:43.327126   29973 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
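Editor's note: the three YAML documents rendered earlier are staged on the guest as /var/tmp/minikube/kubeadm.yaml.new before being promoted to kubeadm.yaml. A config like this can be sanity-checked without mutating the node, assuming kubeadm is on the PATH (a sketch, not something minikube runs here):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run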
	I1107 23:26:43.343033   29973 ssh_runner.go:195] Run: grep 192.168.39.246	control-plane.minikube.internal$ /etc/hosts
	I1107 23:26:43.346813   29973 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.246	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
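Editor's note: the one-liner above updates /etc/hosts idempotently: grep -v drops any stale control-plane.minikube.internal entry, echo appends the current mapping, and sudo cp installs the result (cp rather than mv, so the inode and attributes of /etc/hosts are preserved). The same pattern with a hypothetical host and IP, for illustration only:

    { grep -v $'\tmyhost.internal$' /etc/hosts; printf '203.0.113.7\tmyhost.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts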
	I1107 23:26:43.357949   29973 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062 for IP: 192.168.39.246
	I1107 23:26:43.357991   29973 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:26:43.358165   29973 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1107 23:26:43.358227   29973 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1107 23:26:43.358288   29973 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.key
	I1107 23:26:43.358302   29973 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.crt with IP's: []
	I1107 23:26:43.538862   29973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.crt ...
	I1107 23:26:43.538894   29973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.crt: {Name:mk0c3763372911c172c07fcc2b347dd151766045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:26:43.539045   29973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.key ...
	I1107 23:26:43.539055   29973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.key: {Name:mk0fa4fb6754d83767b64614316581d3b6b82e04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:26:43.539124   29973 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/apiserver.key.4f23f264
	I1107 23:26:43.539137   29973 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/apiserver.crt.4f23f264 with IP's: [192.168.39.246 10.96.0.1 127.0.0.1 10.0.0.1]
	I1107 23:26:43.703063   29973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/apiserver.crt.4f23f264 ...
	I1107 23:26:43.703091   29973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/apiserver.crt.4f23f264: {Name:mk75320fe0c67ecb2dffe7339715de8d2e55c343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:26:43.703262   29973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/apiserver.key.4f23f264 ...
	I1107 23:26:43.703277   29973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/apiserver.key.4f23f264: {Name:mk3d98990dcdf32b43d4789d634a26e6bc06e26b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:26:43.703344   29973 certs.go:337] copying /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/apiserver.crt.4f23f264 -> /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/apiserver.crt
	I1107 23:26:43.703427   29973 certs.go:341] copying /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/apiserver.key.4f23f264 -> /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/apiserver.key
	I1107 23:26:43.703482   29973 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/proxy-client.key
	I1107 23:26:43.703495   29973 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/proxy-client.crt with IP's: []
	I1107 23:26:43.765494   29973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/proxy-client.crt ...
	I1107 23:26:43.765523   29973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/proxy-client.crt: {Name:mk813e3ee4f7a3fb67277796b02f1a89bcd7cd95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:26:43.765667   29973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/proxy-client.key ...
	I1107 23:26:43.765679   29973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/proxy-client.key: {Name:mkcc8897a67e5dc5185ec3f92cc60cd42473e765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:26:43.765766   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1107 23:26:43.765793   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1107 23:26:43.765808   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1107 23:26:43.765835   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1107 23:26:43.765848   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1107 23:26:43.765864   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1107 23:26:43.765876   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1107 23:26:43.765888   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1107 23:26:43.765938   29973 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem (1338 bytes)
	W1107 23:26:43.765969   29973 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848_empty.pem, impossibly tiny 0 bytes
	I1107 23:26:43.765983   29973 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 23:26:43.766028   29973 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1107 23:26:43.766061   29973 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1107 23:26:43.766082   29973 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1107 23:26:43.766118   29973 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem (1708 bytes)
	I1107 23:26:43.766144   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> /usr/share/ca-certificates/168482.pem
	I1107 23:26:43.766159   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:26:43.766174   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem -> /usr/share/ca-certificates/16848.pem
	I1107 23:26:43.766692   29973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 23:26:43.796237   29973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1107 23:26:43.818563   29973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 23:26:43.841275   29973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1107 23:26:43.864153   29973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 23:26:43.887768   29973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1107 23:26:43.909872   29973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 23:26:43.931918   29973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1107 23:26:43.954070   29973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /usr/share/ca-certificates/168482.pem (1708 bytes)
	I1107 23:26:43.977800   29973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 23:26:44.000038   29973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem --> /usr/share/ca-certificates/16848.pem (1338 bytes)
	I1107 23:26:44.022572   29973 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 23:26:44.038333   29973 ssh_runner.go:195] Run: openssl version
	I1107 23:26:44.043634   29973 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1107 23:26:44.043695   29973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168482.pem && ln -fs /usr/share/ca-certificates/168482.pem /etc/ssl/certs/168482.pem"
	I1107 23:26:44.052881   29973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168482.pem
	I1107 23:26:44.057399   29973 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1107 23:26:44.057426   29973 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1107 23:26:44.057465   29973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168482.pem
	I1107 23:26:44.062727   29973 command_runner.go:130] > 3ec20f2e
	I1107 23:26:44.063101   29973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168482.pem /etc/ssl/certs/3ec20f2e.0"
	I1107 23:26:44.072869   29973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 23:26:44.082639   29973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:26:44.086833   29973 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:26:44.086947   29973 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:26:44.086992   29973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:26:44.092196   29973 command_runner.go:130] > b5213941
	I1107 23:26:44.092540   29973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 23:26:44.102575   29973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16848.pem && ln -fs /usr/share/ca-certificates/16848.pem /etc/ssl/certs/16848.pem"
	I1107 23:26:44.112613   29973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16848.pem
	I1107 23:26:44.116803   29973 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1107 23:26:44.116913   29973 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1107 23:26:44.116962   29973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16848.pem
	I1107 23:26:44.122600   29973 command_runner.go:130] > 51391683
	I1107 23:26:44.122664   29973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16848.pem /etc/ssl/certs/51391683.0"
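Editor's note: each test -L / ln -fs pair above wires a certificate into OpenSSL's hashed lookup scheme: TLS clients locate a CA by hashing its subject name and opening /etc/ssl/certs/<hash>.0, so the hash printed just before each link (3ec20f2e, b5213941, 51391683) becomes the symlink name. Reproducing one link by hand, using paths from this log (sketch):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"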
	I1107 23:26:44.133695   29973 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1107 23:26:44.137758   29973 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1107 23:26:44.137811   29973 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1107 23:26:44.137848   29973 kubeadm.go:404] StartCluster: {Name:multinode-553062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-553062 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binary
Mirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:26:44.137919   29973 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1107 23:26:44.137969   29973 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1107 23:26:44.181567   29973 cri.go:89] found id: ""
	I1107 23:26:44.181643   29973 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 23:26:44.191144   29973 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1107 23:26:44.191174   29973 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1107 23:26:44.191185   29973 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1107 23:26:44.191263   29973 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 23:26:44.200576   29973 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 23:26:44.210685   29973 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1107 23:26:44.210710   29973 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1107 23:26:44.210729   29973 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1107 23:26:44.210744   29973 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 23:26:44.211005   29973 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 23:26:44.211034   29973 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1107 23:26:44.538534   29973 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 23:26:44.538575   29973 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 23:26:57.190920   29973 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1107 23:26:57.190952   29973 command_runner.go:130] > [init] Using Kubernetes version: v1.28.3
	I1107 23:26:57.191012   29973 kubeadm.go:322] [preflight] Running pre-flight checks
	I1107 23:26:57.191021   29973 command_runner.go:130] > [preflight] Running pre-flight checks
	I1107 23:26:57.191131   29973 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 23:26:57.191147   29973 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 23:26:57.191244   29973 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 23:26:57.191253   29973 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 23:26:57.191419   29973 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 23:26:57.191432   29973 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 23:26:57.191482   29973 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 23:26:57.191497   29973 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 23:26:57.193190   29973 out.go:204]   - Generating certificates and keys ...
	I1107 23:26:57.193283   29973 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1107 23:26:57.193295   29973 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1107 23:26:57.193379   29973 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1107 23:26:57.193390   29973 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1107 23:26:57.193493   29973 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1107 23:26:57.193512   29973 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1107 23:26:57.193607   29973 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1107 23:26:57.193628   29973 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1107 23:26:57.193726   29973 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1107 23:26:57.193729   29973 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1107 23:26:57.193806   29973 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1107 23:26:57.193817   29973 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1107 23:26:57.193886   29973 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1107 23:26:57.193898   29973 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1107 23:26:57.194056   29973 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-553062] and IPs [192.168.39.246 127.0.0.1 ::1]
	I1107 23:26:57.194067   29973 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-553062] and IPs [192.168.39.246 127.0.0.1 ::1]
	I1107 23:26:57.194137   29973 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1107 23:26:57.194157   29973 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1107 23:26:57.194322   29973 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-553062] and IPs [192.168.39.246 127.0.0.1 ::1]
	I1107 23:26:57.194328   29973 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-553062] and IPs [192.168.39.246 127.0.0.1 ::1]
	I1107 23:26:57.194428   29973 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1107 23:26:57.194438   29973 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1107 23:26:57.194528   29973 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1107 23:26:57.194537   29973 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1107 23:26:57.194585   29973 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1107 23:26:57.194595   29973 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1107 23:26:57.194676   29973 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 23:26:57.194685   29973 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 23:26:57.194742   29973 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 23:26:57.194760   29973 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 23:26:57.194848   29973 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 23:26:57.194858   29973 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 23:26:57.194939   29973 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 23:26:57.194948   29973 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 23:26:57.195023   29973 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 23:26:57.195032   29973 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 23:26:57.195134   29973 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 23:26:57.195157   29973 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 23:26:57.195243   29973 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 23:26:57.195251   29973 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 23:26:57.196935   29973 out.go:204]   - Booting up control plane ...
	I1107 23:26:57.197030   29973 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 23:26:57.197045   29973 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 23:26:57.197110   29973 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 23:26:57.197118   29973 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 23:26:57.197166   29973 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 23:26:57.197171   29973 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 23:26:57.197246   29973 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 23:26:57.197257   29973 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 23:26:57.197323   29973 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 23:26:57.197329   29973 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 23:26:57.197358   29973 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1107 23:26:57.197363   29973 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1107 23:26:57.197528   29973 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 23:26:57.197535   29973 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 23:26:57.197622   29973 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.002470 seconds
	I1107 23:26:57.197631   29973 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002470 seconds
	I1107 23:26:57.197747   29973 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1107 23:26:57.197755   29973 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1107 23:26:57.197891   29973 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1107 23:26:57.197899   29973 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1107 23:26:57.197968   29973 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1107 23:26:57.197976   29973 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1107 23:26:57.198112   29973 command_runner.go:130] > [mark-control-plane] Marking the node multinode-553062 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1107 23:26:57.198118   29973 kubeadm.go:322] [mark-control-plane] Marking the node multinode-553062 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1107 23:26:57.198163   29973 command_runner.go:130] > [bootstrap-token] Using token: cppz28.ou3t1euyw3tj26w1
	I1107 23:26:57.198168   29973 kubeadm.go:322] [bootstrap-token] Using token: cppz28.ou3t1euyw3tj26w1
	I1107 23:26:57.200548   29973 out.go:204]   - Configuring RBAC rules ...
	I1107 23:26:57.200647   29973 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1107 23:26:57.200659   29973 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1107 23:26:57.200739   29973 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1107 23:26:57.200749   29973 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1107 23:26:57.200905   29973 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1107 23:26:57.200924   29973 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1107 23:26:57.201106   29973 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1107 23:26:57.201116   29973 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1107 23:26:57.201244   29973 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1107 23:26:57.201254   29973 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1107 23:26:57.201367   29973 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1107 23:26:57.201375   29973 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1107 23:26:57.201539   29973 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1107 23:26:57.201558   29973 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1107 23:26:57.201613   29973 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1107 23:26:57.201623   29973 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1107 23:26:57.201688   29973 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1107 23:26:57.201698   29973 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1107 23:26:57.201704   29973 kubeadm.go:322] 
	I1107 23:26:57.201787   29973 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1107 23:26:57.201797   29973 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1107 23:26:57.201803   29973 kubeadm.go:322] 
	I1107 23:26:57.201908   29973 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1107 23:26:57.201926   29973 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1107 23:26:57.201936   29973 kubeadm.go:322] 
	I1107 23:26:57.201983   29973 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1107 23:26:57.201992   29973 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1107 23:26:57.202071   29973 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1107 23:26:57.202084   29973 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1107 23:26:57.202153   29973 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1107 23:26:57.202162   29973 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1107 23:26:57.202168   29973 kubeadm.go:322] 
	I1107 23:26:57.202241   29973 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1107 23:26:57.202249   29973 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1107 23:26:57.202259   29973 kubeadm.go:322] 
	I1107 23:26:57.202339   29973 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1107 23:26:57.202354   29973 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1107 23:26:57.202386   29973 kubeadm.go:322] 
	I1107 23:26:57.202456   29973 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1107 23:26:57.202466   29973 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1107 23:26:57.202570   29973 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1107 23:26:57.202579   29973 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1107 23:26:57.202683   29973 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1107 23:26:57.202699   29973 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1107 23:26:57.202709   29973 kubeadm.go:322] 
	I1107 23:26:57.202837   29973 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1107 23:26:57.202847   29973 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1107 23:26:57.202955   29973 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1107 23:26:57.202960   29973 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1107 23:26:57.202970   29973 kubeadm.go:322] 
	I1107 23:26:57.203071   29973 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token cppz28.ou3t1euyw3tj26w1 \
	I1107 23:26:57.203079   29973 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token cppz28.ou3t1euyw3tj26w1 \
	I1107 23:26:57.203190   29973 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 \
	I1107 23:26:57.203199   29973 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 \
	I1107 23:26:57.203226   29973 command_runner.go:130] > 	--control-plane 
	I1107 23:26:57.203235   29973 kubeadm.go:322] 	--control-plane 
	I1107 23:26:57.203242   29973 kubeadm.go:322] 
	I1107 23:26:57.203346   29973 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1107 23:26:57.203359   29973 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1107 23:26:57.203365   29973 kubeadm.go:322] 
	I1107 23:26:57.203461   29973 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token cppz28.ou3t1euyw3tj26w1 \
	I1107 23:26:57.203469   29973 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token cppz28.ou3t1euyw3tj26w1 \
	I1107 23:26:57.203595   29973 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 
	I1107 23:26:57.203617   29973 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 
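Editor's note: the sha256:... value in both join commands is a hash of the CA's DER-encoded public key, not of the certificate file itself. Per the kubeadm documentation it can be recomputed from the CA certificate, which this cluster keeps at /var/lib/minikube/certs/ca.crt (the certificatesDir above) rather than the stock /etc/kubernetes/pki/ca.crt:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'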
	I1107 23:26:57.203629   29973 cni.go:84] Creating CNI manager for ""
	I1107 23:26:57.203638   29973 cni.go:136] 1 nodes found, recommending kindnet
	I1107 23:26:57.205245   29973 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1107 23:26:57.206498   29973 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1107 23:26:57.225627   29973 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1107 23:26:57.225647   29973 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1107 23:26:57.225656   29973 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1107 23:26:57.225667   29973 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1107 23:26:57.225693   29973 command_runner.go:130] > Access: 2023-11-07 23:26:24.870432823 +0000
	I1107 23:26:57.225707   29973 command_runner.go:130] > Modify: 2023-11-07 07:42:50.000000000 +0000
	I1107 23:26:57.225713   29973 command_runner.go:130] > Change: 2023-11-07 23:26:23.025432823 +0000
	I1107 23:26:57.225717   29973 command_runner.go:130] >  Birth: -
	I1107 23:26:57.227041   29973 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1107 23:26:57.227059   29973 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1107 23:26:57.267719   29973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1107 23:26:58.155478   29973 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1107 23:26:58.155530   29973 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1107 23:26:58.155539   29973 command_runner.go:130] > serviceaccount/kindnet created
	I1107 23:26:58.155546   29973 command_runner.go:130] > daemonset.apps/kindnet created
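Editor's note: with one node found, cni.go selects kindnet, and the apply above creates its ClusterRole, binding, service account, and daemonset. A quick way to confirm the CNI pods come up, assuming kubectl points at this cluster (sketch):

    kubectl -n kube-system rollout status daemonset/kindnet --timeout=120s
    kubectl -n kube-system get daemonset kindnet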
	I1107 23:26:58.155596   29973 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1107 23:26:58.155700   29973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:26:58.155748   29973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=multinode-553062 minikube.k8s.io/updated_at=2023_11_07T23_26_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:26:58.359246   29973 command_runner.go:130] > node/multinode-553062 labeled
	I1107 23:26:58.360679   29973 command_runner.go:130] > -16
	I1107 23:26:58.360699   29973 ops.go:34] apiserver oom_adj: -16
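Editor's note: the -16 read back from /proc/<pid>/oom_adj is the legacy kernel view of the oom_score_adj that kubelet assigns to critical static pods (around -997), i.e. the OOM killer should sacrifice almost anything else before the API server. The modern knob can be read directly (sketch):

    cat /proc/$(pgrep -o kube-apiserver)/oom_score_adj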
	I1107 23:26:58.360721   29973 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1107 23:26:58.360807   29973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:26:58.454382   29973 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:26:58.455943   29973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:26:58.541257   29973 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:26:59.042066   29973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:26:59.124129   29973 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:26:59.542006   29973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:26:59.629174   29973 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:27:00.041657   29973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:00.118603   29973 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:27:00.542465   29973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:00.642109   29973 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:27:01.041950   29973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:01.119788   29973 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:27:01.541431   29973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:01.631197   29973 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:27:02.041989   29973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:02.127496   29973 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:27:02.542155   29973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:02.628277   29973 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:27:03.041913   29973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:03.122869   29973 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:27:03.542493   29973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:03.632507   29973 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:27:04.041516   29973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:04.121348   29973 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:27:04.542338   29973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:04.633042   29973 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:27:05.042162   29973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:05.125249   29973 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:27:05.541770   29973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:05.627754   29973 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:27:06.042076   29973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:06.166856   29973 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:27:06.542042   29973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:06.636255   29973 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:27:07.041498   29973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:07.127165   29973 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:27:07.541439   29973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:07.641163   29973 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:27:08.041645   29973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:08.142701   29973 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:27:08.541573   29973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:08.647748   29973 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:27:09.041962   29973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:09.132925   29973 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:27:09.541516   29973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:09.672599   29973 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:27:10.042268   29973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:10.132563   29973 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:27:10.541626   29973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:27:10.701768   29973 command_runner.go:130] > NAME      SECRETS   AGE
	I1107 23:27:10.701792   29973 command_runner.go:130] > default   0         0s
	I1107 23:27:10.701814   29973 kubeadm.go:1081] duration metric: took 12.546164545s to wait for elevateKubeSystemPrivileges.
	I1107 23:27:10.701829   29973 kubeadm.go:406] StartCluster complete in 26.563984116s
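Editor's note: the burst of 'serviceaccounts "default" not found' errors above is expected. minikube polls until kube-controller-manager's service-account controller populates the default namespace, which here took about 12.5s. The equivalent wait as a plain shell loop (a sketch, not minikube's actual code):

    until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5
    done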
	I1107 23:27:10.701882   29973 settings.go:142] acquiring lock: {Name:mk24113e0811d0822c92609e9886aa6fa175d90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:27:10.701966   29973 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1107 23:27:10.702663   29973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:27:10.702874   29973 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1107 23:27:10.702966   29973 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1107 23:27:10.703041   29973 addons.go:69] Setting storage-provisioner=true in profile "multinode-553062"
	I1107 23:27:10.703054   29973 addons.go:69] Setting default-storageclass=true in profile "multinode-553062"
	I1107 23:27:10.703067   29973 addons.go:231] Setting addon storage-provisioner=true in "multinode-553062"
	I1107 23:27:10.703074   29973 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-553062"
	I1107 23:27:10.703127   29973 config.go:182] Loaded profile config "multinode-553062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:27:10.703144   29973 host.go:66] Checking if "multinode-553062" exists ...
	I1107 23:27:10.703172   29973 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1107 23:27:10.704018   29973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:27:10.704061   29973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:27:10.704010   29973 kapi.go:59] client config for multinode-553062: &rest.Config{Host:"https://192.168.39.246:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.key", CAFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:27:10.704211   29973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:27:10.704252   29973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:27:10.704917   29973 cert_rotation.go:137] Starting client certificate rotation controller
	I1107 23:27:10.705746   29973 round_trippers.go:463] GET https://192.168.39.246:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1107 23:27:10.705764   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:10.705775   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:10.705785   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:10.717806   29973 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1107 23:27:10.717830   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:10.717840   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:10.717848   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:10.717855   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:10.717863   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:10.717875   29973 round_trippers.go:580]     Content-Length: 291
	I1107 23:27:10.717886   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:10 GMT
	I1107 23:27:10.717896   29973 round_trippers.go:580]     Audit-Id: 1cde3f3f-205a-401a-b45e-500250cabac1
	I1107 23:27:10.718033   29973 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"99a4298f-5274-4bac-956d-86f8091a0b82","resourceVersion":"352","creationTimestamp":"2023-11-07T23:26:57Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1107 23:27:10.718600   29973 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"99a4298f-5274-4bac-956d-86f8091a0b82","resourceVersion":"352","creationTimestamp":"2023-11-07T23:26:57Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1107 23:27:10.718660   29973 round_trippers.go:463] PUT https://192.168.39.246:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1107 23:27:10.718673   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:10.718683   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:10.718695   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:10.718705   29973 round_trippers.go:473]     Content-Type: application/json
	I1107 23:27:10.719941   29973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40435
	I1107 23:27:10.720368   29973 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:27:10.720914   29973 main.go:141] libmachine: Using API Version  1
	I1107 23:27:10.720942   29973 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:27:10.721270   29973 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:27:10.721475   29973 main.go:141] libmachine: (multinode-553062) Calling .GetState
	I1107 23:27:10.723213   29973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46803
	I1107 23:27:10.723460   29973 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1107 23:27:10.723554   29973 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:27:10.723787   29973 kapi.go:59] client config for multinode-553062: &rest.Config{Host:"https://192.168.39.246:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.key", CAFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:27:10.724027   29973 main.go:141] libmachine: Using API Version  1
	I1107 23:27:10.724054   29973 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:27:10.724133   29973 addons.go:231] Setting addon default-storageclass=true in "multinode-553062"
	I1107 23:27:10.724170   29973 host.go:66] Checking if "multinode-553062" exists ...
	I1107 23:27:10.724358   29973 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:27:10.724583   29973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:27:10.724625   29973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:27:10.724879   29973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:27:10.724906   29973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:27:10.733975   29973 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1107 23:27:10.734000   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:10.734011   29973 round_trippers.go:580]     Audit-Id: faf54e6b-eb76-4e34-8da2-8b7afd539b41
	I1107 23:27:10.734020   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:10.734033   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:10.734044   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:10.734056   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:10.734068   29973 round_trippers.go:580]     Content-Length: 291
	I1107 23:27:10.734077   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:10 GMT
	I1107 23:27:10.734105   29973 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"99a4298f-5274-4bac-956d-86f8091a0b82","resourceVersion":"353","creationTimestamp":"2023-11-07T23:26:57Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1107 23:27:10.734250   29973 round_trippers.go:463] GET https://192.168.39.246:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1107 23:27:10.734265   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:10.734275   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:10.734290   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:10.738723   29973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33023
	I1107 23:27:10.738796   29973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41193
	I1107 23:27:10.739100   29973 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:27:10.739209   29973 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:27:10.739570   29973 main.go:141] libmachine: Using API Version  1
	I1107 23:27:10.739590   29973 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:27:10.739718   29973 main.go:141] libmachine: Using API Version  1
	I1107 23:27:10.739741   29973 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:27:10.739890   29973 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:27:10.740067   29973 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:27:10.740230   29973 main.go:141] libmachine: (multinode-553062) Calling .GetState
	I1107 23:27:10.740431   29973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:27:10.740477   29973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:27:10.741791   29973 main.go:141] libmachine: (multinode-553062) Calling .DriverName
	I1107 23:27:10.744474   29973 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:27:10.745997   29973 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:27:10.746017   29973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1107 23:27:10.746035   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHHostname
	I1107 23:27:10.747742   29973 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1107 23:27:10.747763   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:10.747774   29973 round_trippers.go:580]     Audit-Id: 9a365eec-d84d-49d4-9141-35034cbca6aa
	I1107 23:27:10.747783   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:10.747791   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:10.747803   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:10.747811   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:10.747820   29973 round_trippers.go:580]     Content-Length: 291
	I1107 23:27:10.747828   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:10 GMT
	I1107 23:27:10.749219   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:27:10.749673   29973 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:26:27 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:27:10.749701   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:27:10.749895   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHPort
	I1107 23:27:10.750083   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:27:10.750245   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHUsername
	I1107 23:27:10.750390   29973 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062/id_rsa Username:docker}
	I1107 23:27:10.754288   29973 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"99a4298f-5274-4bac-956d-86f8091a0b82","resourceVersion":"353","creationTimestamp":"2023-11-07T23:26:57Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1107 23:27:10.754424   29973 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-553062" context rescaled to 1 replicas
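
The GET/PUT pair above is the coredns deployment being rescaled from 2 replicas to 1 through the apps/v1 Scale subresource: read the current Scale object, lower spec.replicas, and write it back (the resourceVersion carried over from the GET, "352" here, provides optimistic concurrency). A minimal client-go sketch of the same sequence, assuming a reachable cluster and a *kubernetes.Clientset; names are illustrative, not minikube's actual kapi.go code:

package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rescaleCoreDNS mirrors the log's GET .../deployments/coredns/scale
// followed by a PUT with spec.replicas changed.
func rescaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset, replicas int32) error {
	deploys := cs.AppsV1().Deployments("kube-system")
	scale, err := deploys.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if scale.Spec.Replicas == replicas {
		return nil // already at the desired count
	}
	scale.Spec.Replicas = replicas
	// The PUT is rejected with a conflict if the object changed since the
	// GET, because the Scale carries its resourceVersion along.
	_, err = deploys.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}
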
	I1107 23:27:10.754459   29973 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1107 23:27:10.756376   29973 out.go:177] * Verifying Kubernetes components...
	I1107 23:27:10.757816   29973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:27:10.755721   29973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41561
	I1107 23:27:10.758195   29973 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:27:10.758572   29973 main.go:141] libmachine: Using API Version  1
	I1107 23:27:10.758585   29973 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:27:10.758894   29973 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:27:10.759084   29973 main.go:141] libmachine: (multinode-553062) Calling .GetState
	I1107 23:27:10.760435   29973 main.go:141] libmachine: (multinode-553062) Calling .DriverName
	I1107 23:27:10.760655   29973 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1107 23:27:10.760671   29973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1107 23:27:10.760686   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHHostname
	I1107 23:27:10.763889   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:27:10.764351   29973 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:26:27 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:27:10.764385   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:27:10.764547   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHPort
	I1107 23:27:10.764701   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:27:10.764909   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHUsername
	I1107 23:27:10.765032   29973 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062/id_rsa Username:docker}
	I1107 23:27:10.913113   29973 command_runner.go:130] > apiVersion: v1
	I1107 23:27:10.913134   29973 command_runner.go:130] > data:
	I1107 23:27:10.913141   29973 command_runner.go:130] >   Corefile: |
	I1107 23:27:10.913146   29973 command_runner.go:130] >     .:53 {
	I1107 23:27:10.913152   29973 command_runner.go:130] >         errors
	I1107 23:27:10.913159   29973 command_runner.go:130] >         health {
	I1107 23:27:10.913166   29973 command_runner.go:130] >            lameduck 5s
	I1107 23:27:10.913172   29973 command_runner.go:130] >         }
	I1107 23:27:10.913179   29973 command_runner.go:130] >         ready
	I1107 23:27:10.913190   29973 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1107 23:27:10.913205   29973 command_runner.go:130] >            pods insecure
	I1107 23:27:10.913218   29973 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1107 23:27:10.913238   29973 command_runner.go:130] >            ttl 30
	I1107 23:27:10.913247   29973 command_runner.go:130] >         }
	I1107 23:27:10.913256   29973 command_runner.go:130] >         prometheus :9153
	I1107 23:27:10.913267   29973 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1107 23:27:10.913281   29973 command_runner.go:130] >            max_concurrent 1000
	I1107 23:27:10.913287   29973 command_runner.go:130] >         }
	I1107 23:27:10.913294   29973 command_runner.go:130] >         cache 30
	I1107 23:27:10.913300   29973 command_runner.go:130] >         loop
	I1107 23:27:10.913305   29973 command_runner.go:130] >         reload
	I1107 23:27:10.913308   29973 command_runner.go:130] >         loadbalance
	I1107 23:27:10.913312   29973 command_runner.go:130] >     }
	I1107 23:27:10.913316   29973 command_runner.go:130] > kind: ConfigMap
	I1107 23:27:10.913320   29973 command_runner.go:130] > metadata:
	I1107 23:27:10.913326   29973 command_runner.go:130] >   creationTimestamp: "2023-11-07T23:26:56Z"
	I1107 23:27:10.913331   29973 command_runner.go:130] >   name: coredns
	I1107 23:27:10.913335   29973 command_runner.go:130] >   namespace: kube-system
	I1107 23:27:10.913339   29973 command_runner.go:130] >   resourceVersion: "220"
	I1107 23:27:10.913345   29973 command_runner.go:130] >   uid: f4ddf0dd-b180-495a-83b0-8d6d546a8bca
	I1107 23:27:10.914831   29973 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1107 23:27:10.915094   29973 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1107 23:27:10.915316   29973 kapi.go:59] client config for multinode-553062: &rest.Config{Host:"https://192.168.39.246:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.key", CAFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:27:10.915535   29973 node_ready.go:35] waiting up to 6m0s for node "multinode-553062" to be "Ready" ...
	I1107 23:27:10.915612   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:27:10.915619   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:10.915627   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:10.915634   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:10.921300   29973 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1107 23:27:10.921320   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:10.921330   29973 round_trippers.go:580]     Audit-Id: fb5033e6-ab62-4a14-ae2b-29838c1387f4
	I1107 23:27:10.921339   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:10.921352   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:10.921365   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:10.921376   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:10.921386   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:10 GMT
	I1107 23:27:10.921501   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"326","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1107 23:27:10.922041   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:27:10.922057   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:10.922067   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:10.922077   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:10.928642   29973 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1107 23:27:10.928659   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:10.928665   29973 round_trippers.go:580]     Audit-Id: 98cff495-9445-4a0d-987a-cdb43cac897c
	I1107 23:27:10.928670   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:10.928678   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:10.928686   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:10.928702   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:10.928708   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:10 GMT
	I1107 23:27:10.929314   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"326","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1107 23:27:10.958618   29973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:27:11.039613   29973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1107 23:27:11.430298   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:27:11.430322   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:11.430330   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:11.430336   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:11.432798   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:27:11.432832   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:11.432844   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:11.432853   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:11.432861   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:11.432873   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:11.432879   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:11 GMT
	I1107 23:27:11.432884   29973 round_trippers.go:580]     Audit-Id: 01105934-3222-4d14-87e4-3428aa263e19
	I1107 23:27:11.433178   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"326","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1107 23:27:11.617712   29973 command_runner.go:130] > configmap/coredns replaced
	I1107 23:27:11.617749   29973 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
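
These two lines confirm the sed pipeline launched at 23:27:10.914831: it inserts a log directive before errors and a hosts block (192.168.39.1 host.minikube.internal, with fallthrough) ahead of the forward plugin in the Corefile printed earlier. A rough client-go equivalent of that edit, assuming the same ConfigMap layout; this is a sketch, not minikube's implementation:

package example

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// injectHostRecord splices a hosts{} block into the coredns Corefile just
// before the forward plugin, matching the effect of the sed pipeline above.
func injectHostRecord(ctx context.Context, cs *kubernetes.Clientset, hostIP string) error {
	cms := cs.CoreV1().ConfigMaps("kube-system")
	cm, err := cms.Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	hosts := "        hosts {\n           " + hostIP + " host.minikube.internal\n           fallthrough\n        }\n"
	// The Corefile in the ConfigMap indents plugin directives by 8 spaces,
	// as shown in the dump above; anchor the insertion on the forward line.
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
		"        forward . /etc/resolv.conf", hosts+"        forward . /etc/resolv.conf", 1)
	_, err = cms.Update(ctx, cm, metav1.UpdateOptions{})
	return err
}
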
	I1107 23:27:11.733995   29973 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1107 23:27:11.738761   29973 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1107 23:27:11.750415   29973 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1107 23:27:11.758762   29973 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1107 23:27:11.766217   29973 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1107 23:27:11.778448   29973 command_runner.go:130] > pod/storage-provisioner created
	I1107 23:27:11.780967   29973 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1107 23:27:11.780980   29973 main.go:141] libmachine: Making call to close driver server
	I1107 23:27:11.780995   29973 main.go:141] libmachine: (multinode-553062) Calling .Close
	I1107 23:27:11.780995   29973 main.go:141] libmachine: Making call to close driver server
	I1107 23:27:11.781076   29973 main.go:141] libmachine: (multinode-553062) Calling .Close
	I1107 23:27:11.781374   29973 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:27:11.781393   29973 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:27:11.781403   29973 main.go:141] libmachine: Making call to close driver server
	I1107 23:27:11.781413   29973 main.go:141] libmachine: (multinode-553062) DBG | Closing plugin on server side
	I1107 23:27:11.781438   29973 main.go:141] libmachine: (multinode-553062) Calling .Close
	I1107 23:27:11.781443   29973 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:27:11.781466   29973 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:27:11.781476   29973 main.go:141] libmachine: Making call to close driver server
	I1107 23:27:11.781489   29973 main.go:141] libmachine: (multinode-553062) Calling .Close
	I1107 23:27:11.781651   29973 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:27:11.781728   29973 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:27:11.781767   29973 main.go:141] libmachine: (multinode-553062) DBG | Closing plugin on server side
	I1107 23:27:11.781783   29973 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:27:11.781791   29973 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:27:11.781937   29973 round_trippers.go:463] GET https://192.168.39.246:8443/apis/storage.k8s.io/v1/storageclasses
	I1107 23:27:11.781950   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:11.781961   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:11.781978   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:11.790366   29973 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1107 23:27:11.790394   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:11.790404   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:11.790412   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:11.790419   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:11.790428   29973 round_trippers.go:580]     Content-Length: 1273
	I1107 23:27:11.790434   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:11 GMT
	I1107 23:27:11.790441   29973 round_trippers.go:580]     Audit-Id: 448168b0-3335-4152-9d54-778d791b4d2a
	I1107 23:27:11.790450   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:11.790529   29973 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"373"},"items":[{"metadata":{"name":"standard","uid":"6cb5059e-ae98-4603-842b-c1d5af858c7f","resourceVersion":"364","creationTimestamp":"2023-11-07T23:27:11Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-07T23:27:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1107 23:27:11.791052   29973 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"6cb5059e-ae98-4603-842b-c1d5af858c7f","resourceVersion":"364","creationTimestamp":"2023-11-07T23:27:11Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-07T23:27:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1107 23:27:11.791116   29973 round_trippers.go:463] PUT https://192.168.39.246:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1107 23:27:11.791132   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:11.791143   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:11.791153   29973 round_trippers.go:473]     Content-Type: application/json
	I1107 23:27:11.791168   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:11.794606   29973 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:27:11.794620   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:11.794627   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:11.794637   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:11.794642   29973 round_trippers.go:580]     Content-Length: 1220
	I1107 23:27:11.794647   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:11 GMT
	I1107 23:27:11.794652   29973 round_trippers.go:580]     Audit-Id: 3b675f5c-9514-4ff3-8f6f-2e179734d82a
	I1107 23:27:11.794660   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:11.794668   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:11.794720   29973 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"6cb5059e-ae98-4603-842b-c1d5af858c7f","resourceVersion":"364","creationTimestamp":"2023-11-07T23:27:11Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-07T23:27:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1107 23:27:11.794888   29973 main.go:141] libmachine: Making call to close driver server
	I1107 23:27:11.794908   29973 main.go:141] libmachine: (multinode-553062) Calling .Close
	I1107 23:27:11.795173   29973 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:27:11.795182   29973 main.go:141] libmachine: (multinode-553062) DBG | Closing plugin on server side
	I1107 23:27:11.795193   29973 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:27:11.797095   29973 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1107 23:27:11.798524   29973 addons.go:502] enable addons completed in 1.095567257s: enabled=[storage-provisioner default-storageclass]
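
Before the addons are declared enabled, the standard StorageClass is read back and written again (the GET/PUT on /apis/storage.k8s.io/v1/storageclasses above), keeping the default-class annotation in place. A hedged client-go sketch of that check, with illustrative names rather than minikube's actual addons code:

package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureDefaultClass marks the named StorageClass as the cluster default
// via the well-known annotation, mirroring the PUT on "standard" above.
func ensureDefaultClass(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	scs := cs.StorageV1().StorageClasses()
	sc, err := scs.Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	if sc.Annotations["storageclass.kubernetes.io/is-default-class"] == "true" {
		return nil // already the default
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	_, err = scs.Update(ctx, sc, metav1.UpdateOptions{})
	return err
}
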
	I1107 23:27:11.930004   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:27:11.930025   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:11.930033   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:11.930039   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:11.932435   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:27:11.932457   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:11.932466   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:11.932479   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:11.932486   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:11.932494   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:11.932501   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:11 GMT
	I1107 23:27:11.932508   29973 round_trippers.go:580]     Audit-Id: 5b6cc29a-ad69-4c9b-b71c-e2383887987c
	I1107 23:27:11.932715   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"326","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1107 23:27:12.430000   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:27:12.430025   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:12.430033   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:12.430039   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:12.433279   29973 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:27:12.433297   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:12.433303   29973 round_trippers.go:580]     Audit-Id: 4f4e20e8-5bc9-403d-a953-aeca86653508
	I1107 23:27:12.433309   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:12.433314   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:12.433321   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:12.433329   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:12.433338   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:12 GMT
	I1107 23:27:12.433613   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"326","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1107 23:27:12.930224   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:27:12.930249   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:12.930257   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:12.930264   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:12.933174   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:27:12.933197   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:12.933208   29973 round_trippers.go:580]     Audit-Id: 13353a2b-bfb6-4078-8411-794e75f39c5b
	I1107 23:27:12.933214   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:12.933219   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:12.933224   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:12.933230   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:12.933238   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:12 GMT
	I1107 23:27:12.933391   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"326","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1107 23:27:12.933701   29973 node_ready.go:58] node "multinode-553062" has status "Ready":"False"
	I1107 23:27:13.430028   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:27:13.430053   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:13.430061   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:13.430067   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:13.433046   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:27:13.433063   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:13.433078   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:13 GMT
	I1107 23:27:13.433083   29973 round_trippers.go:580]     Audit-Id: 5589a4aa-0b90-4530-92b4-a6195bb7f7c6
	I1107 23:27:13.433088   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:13.433093   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:13.433098   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:13.433104   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:13.433309   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"326","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1107 23:27:13.929933   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:27:13.929960   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:13.929968   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:13.929978   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:13.933147   29973 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:27:13.933169   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:13.933180   29973 round_trippers.go:580]     Audit-Id: 64d7466c-05c1-4455-b0d7-f5f8dfbdcca3
	I1107 23:27:13.933189   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:13.933195   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:13.933200   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:13.933204   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:13.933210   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:13 GMT
	I1107 23:27:13.933649   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"326","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1107 23:27:14.430358   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:27:14.430380   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:14.430388   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:14.430393   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:14.432880   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:27:14.432897   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:14.432904   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:14.432909   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:14.432914   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:14.432919   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:14 GMT
	I1107 23:27:14.432925   29973 round_trippers.go:580]     Audit-Id: ce2e04b7-df17-49db-a2eb-11243654d228
	I1107 23:27:14.432933   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:14.433127   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"326","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1107 23:27:14.929762   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:27:14.929787   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:14.929794   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:14.929801   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:14.932275   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:27:14.932298   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:14.932308   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:14.932316   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:14.932325   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:14 GMT
	I1107 23:27:14.932337   29973 round_trippers.go:580]     Audit-Id: 3207a3fc-86d6-411d-8034-37cb904d1ee3
	I1107 23:27:14.932348   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:14.932368   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:14.933019   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"326","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1107 23:27:15.430761   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:27:15.430782   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:15.430790   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:15.430796   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:15.433383   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:27:15.433402   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:15.433408   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:15.433413   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:15.433419   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:15 GMT
	I1107 23:27:15.433424   29973 round_trippers.go:580]     Audit-Id: 73586e93-1ba4-4aaa-90d8-df3d023f1617
	I1107 23:27:15.433428   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:15.433434   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:15.433789   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"326","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1107 23:27:15.434131   29973 node_ready.go:58] node "multinode-553062" has status "Ready":"False"
	I1107 23:27:15.929802   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:27:15.929824   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:15.929832   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:15.929838   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:15.941642   29973 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1107 23:27:15.941667   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:15.941674   29973 round_trippers.go:580]     Audit-Id: 02214a4a-7601-42ac-8024-a1d1ffd04bfe
	I1107 23:27:15.941679   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:15.941684   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:15.941689   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:15.941694   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:15.941699   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:15 GMT
	I1107 23:27:15.941845   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"389","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1107 23:27:15.942136   29973 node_ready.go:49] node "multinode-553062" has status "Ready":"True"
	I1107 23:27:15.942151   29973 node_ready.go:38] duration metric: took 5.026593325s waiting for node "multinode-553062" to be "Ready" ...
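
The Ready wait that just completed is a plain polling loop: GET the node every ~500ms and test its NodeReady condition until it reports True (here after ~5s, once the update carrying resourceVersion 389 landed). A minimal sketch of that loop, assuming a *kubernetes.Clientset; this is illustrative, not the actual node_ready.go code:

package example

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node object until its Ready condition is True.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err // stop polling on API errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // condition not reported yet; keep polling
		})
}
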
	I1107 23:27:15.942159   29973 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:27:15.942240   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1107 23:27:15.942248   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:15.942257   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:15.942263   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:15.954105   29973 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1107 23:27:15.954123   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:15.954133   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:15.954141   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:15.954149   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:15.954157   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:15 GMT
	I1107 23:27:15.954165   29973 round_trippers.go:580]     Audit-Id: 1fe98e49-28e7-4954-a68a-6ef5f653acb9
	I1107 23:27:15.954173   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:15.955326   29973 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"395"},"items":[{"metadata":{"name":"coredns-5dd5756b68-6ggfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"785c6064-d793-4959-8e34-28b4fc2549fc","resourceVersion":"394","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b131694e-1b3b-40e6-bc1b-3f62a604903c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b131694e-1b3b-40e6-bc1b-3f62a604903c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54817 chars]
	I1107 23:27:15.958252   29973 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6ggfr" in "kube-system" namespace to be "Ready" ...
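
The per-pod wait that starts here applies the same pattern at pod granularity: each GET's response is judged by the PodReady condition in status.conditions. A small helper expressing that check (the helper name is illustrative, not minikube's pod_ready.go):

package example

import corev1 "k8s.io/api/core/v1"

// podIsReady reports whether a pod's PodReady condition is True, the test
// applied to each pod in the system-critical list above.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
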
	I1107 23:27:15.958319   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6ggfr
	I1107 23:27:15.958328   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:15.958335   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:15.958341   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:15.967368   29973 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1107 23:27:15.967386   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:15.967393   29973 round_trippers.go:580]     Audit-Id: b8fb92ef-67ed-4894-b0ef-3a871f9ebc62
	I1107 23:27:15.967399   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:15.967404   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:15.967409   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:15.967414   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:15.967420   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:15 GMT
	I1107 23:27:15.967755   29973 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6ggfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"785c6064-d793-4959-8e34-28b4fc2549fc","resourceVersion":"394","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b131694e-1b3b-40e6-bc1b-3f62a604903c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b131694e-1b3b-40e6-bc1b-3f62a604903c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1107 23:27:15.968188   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:27:15.968201   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:15.968208   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:15.968214   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:15.970096   29973 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 23:27:15.970111   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:15.970119   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:15.970124   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:15.970129   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:15.970134   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:15 GMT
	I1107 23:27:15.970139   29973 round_trippers.go:580]     Audit-Id: 3e0bab93-1a2a-41fa-960c-58c202c0f508
	I1107 23:27:15.970144   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:15.970289   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"389","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1107 23:27:15.970598   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6ggfr
	I1107 23:27:15.970610   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:15.970617   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:15.970623   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:15.973666   29973 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:27:15.973683   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:15.973689   29973 round_trippers.go:580]     Audit-Id: b29b61da-b0e2-410e-84ed-a8af42df12d3
	I1107 23:27:15.973694   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:15.973699   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:15.973704   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:15.973709   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:15.973714   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:15 GMT
	I1107 23:27:15.973819   29973 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6ggfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"785c6064-d793-4959-8e34-28b4fc2549fc","resourceVersion":"394","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b131694e-1b3b-40e6-bc1b-3f62a604903c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b131694e-1b3b-40e6-bc1b-3f62a604903c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1107 23:27:15.974163   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:27:15.974174   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:15.974181   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:15.974187   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:15.976037   29973 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 23:27:15.976051   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:15.976057   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:15 GMT
	I1107 23:27:15.976062   29973 round_trippers.go:580]     Audit-Id: 3fd854f2-d080-42c6-adbd-c6d85fd85988
	I1107 23:27:15.976067   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:15.976072   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:15.976077   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:15.976084   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:15.976200   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"389","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1107 23:27:16.476618   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6ggfr
	I1107 23:27:16.476640   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:16.476648   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:16.476653   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:16.479753   29973 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:27:16.479773   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:16.479781   29973 round_trippers.go:580]     Audit-Id: 91198be4-33ff-48aa-93c9-91f9039d4e22
	I1107 23:27:16.479787   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:16.479792   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:16.479797   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:16.479804   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:16.479817   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:16 GMT
	I1107 23:27:16.480147   29973 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6ggfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"785c6064-d793-4959-8e34-28b4fc2549fc","resourceVersion":"394","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b131694e-1b3b-40e6-bc1b-3f62a604903c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b131694e-1b3b-40e6-bc1b-3f62a604903c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1107 23:27:16.480545   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:27:16.480557   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:16.480563   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:16.480575   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:16.483078   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:27:16.483094   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:16.483100   29973 round_trippers.go:580]     Audit-Id: f92663f0-e6cc-4ce1-a4b1-ab6287ae4469
	I1107 23:27:16.483105   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:16.483110   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:16.483115   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:16.483120   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:16.483128   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:16 GMT
	I1107 23:27:16.483278   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"389","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1107 23:27:16.976905   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6ggfr
	I1107 23:27:16.976926   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:16.976937   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:16.976946   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:16.979681   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:27:16.979700   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:16.979706   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:16.979712   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:16.979721   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:16.979728   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:16.979737   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:16 GMT
	I1107 23:27:16.979752   29973 round_trippers.go:580]     Audit-Id: 2aad7334-c6f2-4ac3-ae4a-6913bbd6d48d
	I1107 23:27:16.980135   29973 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6ggfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"785c6064-d793-4959-8e34-28b4fc2549fc","resourceVersion":"394","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b131694e-1b3b-40e6-bc1b-3f62a604903c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b131694e-1b3b-40e6-bc1b-3f62a604903c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1107 23:27:16.980584   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:27:16.980604   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:16.980616   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:16.980626   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:16.982895   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:27:16.982913   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:16.982922   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:16.982929   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:16 GMT
	I1107 23:27:16.982938   29973 round_trippers.go:580]     Audit-Id: 72ffbeb6-02dc-411c-985a-282337ece325
	I1107 23:27:16.982945   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:16.982958   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:16.982966   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:16.983189   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"389","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1107 23:27:17.476835   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6ggfr
	I1107 23:27:17.476864   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:17.476873   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:17.476879   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:17.479862   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:27:17.479880   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:17.479887   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:17.479892   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:17.479897   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:17.479902   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:17 GMT
	I1107 23:27:17.479907   29973 round_trippers.go:580]     Audit-Id: f9ae131e-a7cd-4d81-92a4-7b8ce93ca9d6
	I1107 23:27:17.479912   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:17.480130   29973 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6ggfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"785c6064-d793-4959-8e34-28b4fc2549fc","resourceVersion":"411","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b131694e-1b3b-40e6-bc1b-3f62a604903c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b131694e-1b3b-40e6-bc1b-3f62a604903c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1107 23:27:17.480655   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:27:17.480670   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:17.480678   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:17.480683   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:17.483089   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:27:17.483109   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:17.483119   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:17 GMT
	I1107 23:27:17.483126   29973 round_trippers.go:580]     Audit-Id: 2ca58a3d-c001-4b93-993d-b0ff477e50d3
	I1107 23:27:17.483132   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:17.483139   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:17.483144   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:17.483157   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:17.483304   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"389","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1107 23:27:17.483644   29973 pod_ready.go:92] pod "coredns-5dd5756b68-6ggfr" in "kube-system" namespace has status "Ready":"True"
	I1107 23:27:17.483665   29973 pod_ready.go:81] duration metric: took 1.525392321s waiting for pod "coredns-5dd5756b68-6ggfr" in "kube-system" namespace to be "Ready" ...
	I1107 23:27:17.483677   29973 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:27:17.483741   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-553062
	I1107 23:27:17.483752   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:17.483762   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:17.483774   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:17.485761   29973 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 23:27:17.485780   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:17.485789   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:17.485796   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:17.485806   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:17 GMT
	I1107 23:27:17.485813   29973 round_trippers.go:580]     Audit-Id: 1c6c5c48-09cf-4604-a101-46bd830adc00
	I1107 23:27:17.485828   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:17.485836   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:17.485991   29973 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-553062","namespace":"kube-system","uid":"3819c5f8-686f-4ce6-95fb-e9d5bb68cbc1","resourceVersion":"405","creationTimestamp":"2023-11-07T23:26:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.246:2379","kubernetes.io/config.hash":"f82562fbdca14daeb385ae6968954f46","kubernetes.io/config.mirror":"f82562fbdca14daeb385ae6968954f46","kubernetes.io/config.seen":"2023-11-07T23:26:48.362630200Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1107 23:27:17.486349   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:27:17.486362   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:17.486369   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:17.486375   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:17.488921   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:27:17.488940   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:17.488950   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:17.488970   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:17 GMT
	I1107 23:27:17.488981   29973 round_trippers.go:580]     Audit-Id: c434e330-1d34-4660-abbd-b5b47869d734
	I1107 23:27:17.488989   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:17.488999   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:17.489017   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:17.489169   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"389","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1107 23:27:17.489533   29973 pod_ready.go:92] pod "etcd-multinode-553062" in "kube-system" namespace has status "Ready":"True"
	I1107 23:27:17.489554   29973 pod_ready.go:81] duration metric: took 5.86293ms waiting for pod "etcd-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:27:17.489579   29973 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:27:17.489641   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-553062
	I1107 23:27:17.489652   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:17.489663   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:17.489672   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:17.491589   29973 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 23:27:17.491606   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:17.491616   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:17.491624   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:17.491632   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:17.491641   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:17.491651   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:17 GMT
	I1107 23:27:17.491666   29973 round_trippers.go:580]     Audit-Id: 3cc815b2-6f6c-4d04-896f-32dfda297ea8
	I1107 23:27:17.491874   29973 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-553062","namespace":"kube-system","uid":"30896fa0-3d8f-4861-bdf5-ad94796ad097","resourceVersion":"406","creationTimestamp":"2023-11-07T23:26:57Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.246:8443","kubernetes.io/config.hash":"cf3161d745dce4ca9e35cf659a0b5ec9","kubernetes.io/config.mirror":"cf3161d745dce4ca9e35cf659a0b5ec9","kubernetes.io/config.seen":"2023-11-07T23:26:57.103263110Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1107 23:27:17.492242   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:27:17.492256   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:17.492263   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:17.492269   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:17.493913   29973 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 23:27:17.493931   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:17.493941   29973 round_trippers.go:580]     Audit-Id: e49400ce-ff6d-4b0f-ab14-3cb2d89d2138
	I1107 23:27:17.493949   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:17.493967   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:17.493975   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:17.493983   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:17.493995   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:17 GMT
	I1107 23:27:17.494120   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"389","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1107 23:27:17.494494   29973 pod_ready.go:92] pod "kube-apiserver-multinode-553062" in "kube-system" namespace has status "Ready":"True"
	I1107 23:27:17.494513   29973 pod_ready.go:81] duration metric: took 4.922573ms waiting for pod "kube-apiserver-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:27:17.494522   29973 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:27:17.494569   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-553062
	I1107 23:27:17.494581   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:17.494589   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:17.494595   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:17.496166   29973 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 23:27:17.496183   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:17.496192   29973 round_trippers.go:580]     Audit-Id: 65a0cef8-b3ea-4489-8a6f-7ef5db1b5e10
	I1107 23:27:17.496200   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:17.496209   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:17.496216   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:17.496223   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:17.496229   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:17 GMT
	I1107 23:27:17.496553   29973 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-553062","namespace":"kube-system","uid":"5a895945-b908-44ba-a1c8-93245f6a93f5","resourceVersion":"407","creationTimestamp":"2023-11-07T23:26:57Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6355e861fae0971467df802e2b4d8be6","kubernetes.io/config.mirror":"6355e861fae0971467df802e2b4d8be6","kubernetes.io/config.seen":"2023-11-07T23:26:57.103264314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1107 23:27:17.530242   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:27:17.530272   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:17.530284   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:17.530293   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:17.533041   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:27:17.533060   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:17.533066   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:17 GMT
	I1107 23:27:17.533071   29973 round_trippers.go:580]     Audit-Id: c6e8b0bd-1fef-47c4-a628-3ee3111fbfab
	I1107 23:27:17.533077   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:17.533082   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:17.533087   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:17.533092   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:17.533422   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"389","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1107 23:27:17.533744   29973 pod_ready.go:92] pod "kube-controller-manager-multinode-553062" in "kube-system" namespace has status "Ready":"True"
	I1107 23:27:17.533762   29973 pod_ready.go:81] duration metric: took 39.23411ms waiting for pod "kube-controller-manager-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:27:17.533776   29973 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-944rz" in "kube-system" namespace to be "Ready" ...
	I1107 23:27:17.730177   29973 request.go:629] Waited for 196.341264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-944rz
	I1107 23:27:17.730236   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-944rz
	I1107 23:27:17.730258   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:17.730266   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:17.730274   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:17.733999   29973 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:27:17.734020   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:17.734029   29973 round_trippers.go:580]     Audit-Id: 2fde5223-f50c-47af-8a0d-3472d94fcbf9
	I1107 23:27:17.734037   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:17.734044   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:17.734052   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:17.734060   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:17.734072   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:17 GMT
	I1107 23:27:17.734665   29973 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-944rz","generateName":"kube-proxy-","namespace":"kube-system","uid":"db20b1cf-b422-4649-a6e1-4549c4c56f33","resourceVersion":"378","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"072addbc-9bf2-4d6f-93c3-120a159f2721","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"072addbc-9bf2-4d6f-93c3-120a159f2721\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1107 23:27:17.930451   29973 request.go:629] Waited for 195.379943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:27:17.930541   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:27:17.930549   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:17.930561   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:17.930582   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:17.933267   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:27:17.933290   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:17.933303   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:17 GMT
	I1107 23:27:17.933310   29973 round_trippers.go:580]     Audit-Id: 01d13e99-041f-4ec6-917b-6dbe4a56b292
	I1107 23:27:17.933318   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:17.933324   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:17.933332   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:17.933339   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:17.933880   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"389","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1107 23:27:17.934177   29973 pod_ready.go:92] pod "kube-proxy-944rz" in "kube-system" namespace has status "Ready":"True"
	I1107 23:27:17.934193   29973 pod_ready.go:81] duration metric: took 400.409243ms waiting for pod "kube-proxy-944rz" in "kube-system" namespace to be "Ready" ...
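
The "Waited ... due to client-side throttling, not priority and fairness" lines above are emitted by client-go's token-bucket rate limiter, not by the API server: with an unconfigured rest.Config the client defaults to 5 requests/second with a burst of 10, so the back-to-back pod and node GETs start queuing for roughly 200ms each (one token refill), which matches the ~195ms waits logged here. A sketch of where those knobs live, assuming a kubeconfig-based client; the path handling and the raised QPS/Burst values are illustrative, not what minikube configures:

    package apiclient

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // newClientset builds a clientset with a larger client-side rate budget.
    // The QPS and Burst values here are example numbers, not minikube's.
    func newClientset(kubeconfig string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, err
    	}
    	cfg.QPS = 50    // client-go default when unset: 5 requests/second
    	cfg.Burst = 100 // client-go default when unset: 10
    	return kubernetes.NewForConfig(cfg)
    }
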
	I1107 23:27:17.934206   29973 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:27:18.130659   29973 request.go:629] Waited for 196.390157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-553062
	I1107 23:27:18.130714   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-553062
	I1107 23:27:18.130731   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:18.130738   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:18.130744   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:18.133801   29973 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:27:18.133822   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:18.133832   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:18.133840   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:18.133847   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:18 GMT
	I1107 23:27:18.133854   29973 round_trippers.go:580]     Audit-Id: da8808e4-548c-4cb7-8863-160134626724
	I1107 23:27:18.133862   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:18.133871   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:18.134100   29973 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-553062","namespace":"kube-system","uid":"334a75af-c6cb-45ac-a020-8afc3f4a4e7a","resourceVersion":"404","creationTimestamp":"2023-11-07T23:26:57Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"101b31a45aab34f5dc66aed5e9e7cce1","kubernetes.io/config.mirror":"101b31a45aab34f5dc66aed5e9e7cce1","kubernetes.io/config.seen":"2023-11-07T23:26:57.103265171Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1107 23:27:18.329859   29973 request.go:629] Waited for 195.319513ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:27:18.329942   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:27:18.329953   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:18.329964   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:18.329973   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:18.334384   29973 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1107 23:27:18.334409   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:18.334419   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:18 GMT
	I1107 23:27:18.334427   29973 round_trippers.go:580]     Audit-Id: 42cecef3-0efb-409e-b424-058fa4646632
	I1107 23:27:18.334437   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:18.334449   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:18.334457   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:18.334465   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:18.335600   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"389","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1107 23:27:18.335909   29973 pod_ready.go:92] pod "kube-scheduler-multinode-553062" in "kube-system" namespace has status "Ready":"True"
	I1107 23:27:18.335925   29973 pod_ready.go:81] duration metric: took 401.712801ms waiting for pod "kube-scheduler-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:27:18.335935   29973 pod_ready.go:38] duration metric: took 2.393743045s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
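
Every pod_ready.go wait above follows one pattern: GET the pod, inspect its Ready condition, GET the owning node, and retry on a short interval until the condition reports True or the 6m0s budget runs out. A minimal sketch of that polling pattern with client-go — the package and function names and the 500ms interval are illustrative assumptions, not minikube's own code:

    package podwait

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady re-fetches the pod on a fixed interval until its Ready
    // condition is True, mirroring the GET/inspect/retry loop in the log above.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // treat transient API errors as "not ready yet"
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }
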
	I1107 23:27:18.335948   29973 api_server.go:52] waiting for apiserver process to appear ...
	I1107 23:27:18.335998   29973 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:27:18.351923   29973 command_runner.go:130] > 1071
	I1107 23:27:18.351970   29973 api_server.go:72] duration metric: took 7.597477157s to wait for apiserver process to appear ...
	I1107 23:27:18.351982   29973 api_server.go:88] waiting for apiserver healthz status ...
	I1107 23:27:18.351999   29973 api_server.go:253] Checking apiserver healthz at https://192.168.39.246:8443/healthz ...
	I1107 23:27:18.358257   29973 api_server.go:279] https://192.168.39.246:8443/healthz returned 200:
	ok
	I1107 23:27:18.358347   29973 round_trippers.go:463] GET https://192.168.39.246:8443/version
	I1107 23:27:18.358359   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:18.358368   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:18.358375   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:18.359370   29973 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1107 23:27:18.359381   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:18.359387   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:18.359392   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:18.359398   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:18.359404   29973 round_trippers.go:580]     Content-Length: 264
	I1107 23:27:18.359411   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:18 GMT
	I1107 23:27:18.359415   29973 round_trippers.go:580]     Audit-Id: f33ae0af-1cb4-4d52-ad25-c80fb81bba1c
	I1107 23:27:18.359423   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:18.359438   29973 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1107 23:27:18.359496   29973 api_server.go:141] control plane version: v1.28.3
	I1107 23:27:18.359521   29973 api_server.go:131] duration metric: took 7.534107ms to wait for apiserver health ...
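
The readiness handshake above is two unauthenticated GETs: /healthz must return 200 with body "ok", then /version reports the control-plane build. Both paths are readable without credentials under the default system:public-info-viewer RBAC binding, so the probe can be reproduced directly. A sketch of the same two requests; skipping TLS verification is for brevity only — minikube itself trusts the cluster CA from its kubeconfig:

    package healthprobe

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    // probe fetches /healthz and /version from an apiserver endpoint such as
    // "https://192.168.39.246:8443" and prints the status plus body of each.
    func probe(apiserver string) error {
    	client := &http.Client{Transport: &http.Transport{
    		// Brevity only: a real client should verify the cluster CA instead.
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	for _, path := range []string{"/healthz", "/version"} {
    		resp, err := client.Get(apiserver + path)
    		if err != nil {
    			return err
    		}
    		body, _ := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		fmt.Printf("GET %s -> %s\n%s\n", path, resp.Status, body)
    	}
    	return nil
    }
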
	I1107 23:27:18.359530   29973 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 23:27:18.529864   29973 request.go:629] Waited for 170.278396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1107 23:27:18.529931   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1107 23:27:18.529936   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:18.529943   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:18.529952   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:18.533789   29973 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:27:18.533809   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:18.533819   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:18.533827   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:18.533835   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:18.533844   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:18 GMT
	I1107 23:27:18.533853   29973 round_trippers.go:580]     Audit-Id: 8bf9a215-d492-4354-ab65-fd210a3cbe85
	I1107 23:27:18.533862   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:18.535491   29973 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"417"},"items":[{"metadata":{"name":"coredns-5dd5756b68-6ggfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"785c6064-d793-4959-8e34-28b4fc2549fc","resourceVersion":"411","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b131694e-1b3b-40e6-bc1b-3f62a604903c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b131694e-1b3b-40e6-bc1b-3f62a604903c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53995 chars]
	I1107 23:27:18.537148   29973 system_pods.go:59] 8 kube-system pods found
	I1107 23:27:18.537178   29973 system_pods.go:61] "coredns-5dd5756b68-6ggfr" [785c6064-d793-4959-8e34-28b4fc2549fc] Running
	I1107 23:27:18.537185   29973 system_pods.go:61] "etcd-multinode-553062" [3819c5f8-686f-4ce6-95fb-e9d5bb68cbc1] Running
	I1107 23:27:18.537191   29973 system_pods.go:61] "kindnet-9stvx" [a9981d59-dbff-456f-9024-2754c2a9d0c6] Running
	I1107 23:27:18.537197   29973 system_pods.go:61] "kube-apiserver-multinode-553062" [30896fa0-3d8f-4861-bdf5-ad94796ad097] Running
	I1107 23:27:18.537205   29973 system_pods.go:61] "kube-controller-manager-multinode-553062" [5a895945-b908-44ba-a1c8-93245f6a93f5] Running
	I1107 23:27:18.537215   29973 system_pods.go:61] "kube-proxy-944rz" [db20b1cf-b422-4649-a6e1-4549c4c56f33] Running
	I1107 23:27:18.537223   29973 system_pods.go:61] "kube-scheduler-multinode-553062" [334a75af-c6cb-45ac-a020-8afc3f4a4e7a] Running
	I1107 23:27:18.537233   29973 system_pods.go:61] "storage-provisioner" [85179396-d02a-404a-a93e-e10db8c673b6] Running
	I1107 23:27:18.537241   29973 system_pods.go:74] duration metric: took 177.704949ms to wait for pod list to return data ...
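
After the per-pod waits, the check above collapses to a single PodList GET against kube-system, with each item's phase printed ("Running"). The same check condensed with client-go — a sketch under the same assumptions as before, with an illustrative helper name:

    package syspods

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // allRunning lists kube-system pods once and reports the first pod whose
    // phase is not Running, matching the PodList scan in the log above.
    func allRunning(ctx context.Context, cs kubernetes.Interface) (bool, error) {
    	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, p := range pods.Items {
    		if p.Status.Phase != corev1.PodRunning {
    			fmt.Printf("%s is %s\n", p.Name, p.Status.Phase)
    			return false, nil
    		}
    	}
    	return len(pods.Items) > 0, nil
    }
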
	I1107 23:27:18.537253   29973 default_sa.go:34] waiting for default service account to be created ...
	I1107 23:27:18.730698   29973 request.go:629] Waited for 193.381582ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I1107 23:27:18.730787   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I1107 23:27:18.730803   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:18.730813   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:18.730826   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:18.733702   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:27:18.733719   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:18.733726   29973 round_trippers.go:580]     Audit-Id: b09d4e00-cd91-4ebf-942a-a8e77b9b2dad
	I1107 23:27:18.733731   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:18.733742   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:18.733750   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:18.733757   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:18.733767   29973 round_trippers.go:580]     Content-Length: 261
	I1107 23:27:18.733774   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:18 GMT
	I1107 23:27:18.733797   29973 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"417"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"6fff4cd8-da06-46a3-88c6-f639ebaea0a1","resourceVersion":"312","creationTimestamp":"2023-11-07T23:27:10Z"}}]}
	I1107 23:27:18.733982   29973 default_sa.go:45] found service account: "default"
	I1107 23:27:18.734004   29973 default_sa.go:55] duration metric: took 196.743946ms for default service account to be created ...
	I1107 23:27:18.734011   29973 system_pods.go:116] waiting for k8s-apps to be running ...
	I1107 23:27:18.930449   29973 request.go:629] Waited for 196.385034ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1107 23:27:18.930514   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1107 23:27:18.930519   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:18.930527   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:18.930533   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:18.934164   29973 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:27:18.934187   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:18.934197   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:18.934205   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:18.934212   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:18.934219   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:18 GMT
	I1107 23:27:18.934227   29973 round_trippers.go:580]     Audit-Id: 9c5b43ae-4bf8-4265-9030-061927b10c21
	I1107 23:27:18.934238   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:18.934904   29973 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"418"},"items":[{"metadata":{"name":"coredns-5dd5756b68-6ggfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"785c6064-d793-4959-8e34-28b4fc2549fc","resourceVersion":"411","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b131694e-1b3b-40e6-bc1b-3f62a604903c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b131694e-1b3b-40e6-bc1b-3f62a604903c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53995 chars]
	I1107 23:27:18.936549   29973 system_pods.go:86] 8 kube-system pods found
	I1107 23:27:18.936571   29973 system_pods.go:89] "coredns-5dd5756b68-6ggfr" [785c6064-d793-4959-8e34-28b4fc2549fc] Running
	I1107 23:27:18.936579   29973 system_pods.go:89] "etcd-multinode-553062" [3819c5f8-686f-4ce6-95fb-e9d5bb68cbc1] Running
	I1107 23:27:18.936585   29973 system_pods.go:89] "kindnet-9stvx" [a9981d59-dbff-456f-9024-2754c2a9d0c6] Running
	I1107 23:27:18.936591   29973 system_pods.go:89] "kube-apiserver-multinode-553062" [30896fa0-3d8f-4861-bdf5-ad94796ad097] Running
	I1107 23:27:18.936598   29973 system_pods.go:89] "kube-controller-manager-multinode-553062" [5a895945-b908-44ba-a1c8-93245f6a93f5] Running
	I1107 23:27:18.936604   29973 system_pods.go:89] "kube-proxy-944rz" [db20b1cf-b422-4649-a6e1-4549c4c56f33] Running
	I1107 23:27:18.936613   29973 system_pods.go:89] "kube-scheduler-multinode-553062" [334a75af-c6cb-45ac-a020-8afc3f4a4e7a] Running
	I1107 23:27:18.936621   29973 system_pods.go:89] "storage-provisioner" [85179396-d02a-404a-a93e-e10db8c673b6] Running
	I1107 23:27:18.936635   29973 system_pods.go:126] duration metric: took 202.617064ms to wait for k8s-apps to be running ...
	I1107 23:27:18.936644   29973 system_svc.go:44] waiting for kubelet service to be running ....
	I1107 23:27:18.936693   29973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:27:18.951713   29973 system_svc.go:56] duration metric: took 15.060933ms WaitForService to wait for kubelet.
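
The WaitForService step above treats a zero exit status from systemctl is-active --quiet as "running" (minikube runs the command with sudo through its ssh runner). A local sketch of the same exit-code check:

    // svc_check.go - sketch: report whether a systemd unit is active.
    // `systemctl is-active --quiet <unit>` exits 0 iff the unit is active.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func serviceActive(unit string) bool {
        return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
    }

    func main() {
        fmt.Println("kubelet active:", serviceActive("kubelet"))
    }
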
	I1107 23:27:18.951741   29973 kubeadm.go:581] duration metric: took 8.197248963s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1107 23:27:18.951761   29973 node_conditions.go:102] verifying NodePressure condition ...
	I1107 23:27:19.130226   29973 request.go:629] Waited for 178.387558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes
	I1107 23:27:19.130290   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes
	I1107 23:27:19.130296   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:19.130305   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:19.130314   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:19.133052   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:27:19.133076   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:19.133086   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:19.133095   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:19 GMT
	I1107 23:27:19.133103   29973 round_trippers.go:580]     Audit-Id: 6207ce5a-35eb-4406-bbf5-fd1d1ba307c5
	I1107 23:27:19.133111   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:19.133118   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:19.133127   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:19.133749   29973 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"418"},"items":[{"metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"389","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5952 chars]
	I1107 23:27:19.134076   29973 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1107 23:27:19.134094   29973 node_conditions.go:123] node cpu capacity is 2
	I1107 23:27:19.134104   29973 node_conditions.go:105] duration metric: took 182.337724ms to run NodePressure ...
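
The NodePressure check reads capacity straight off the NodeList it just fetched; the two values logged are ephemeral-storage and cpu. A client-go sketch of extracting those fields (same standalone-program assumptions as the sketch earlier):

    // node_capacity.go - sketch: print the ephemeral-storage and cpu capacity
    // per node, the two fields the NodePressure check above logs.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
    }
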
	I1107 23:27:19.134113   29973 start.go:228] waiting for startup goroutines ...
	I1107 23:27:19.134126   29973 start.go:233] waiting for cluster config update ...
	I1107 23:27:19.134134   29973 start.go:242] writing updated cluster config ...
	I1107 23:27:19.136559   29973 out.go:177] 
	I1107 23:27:19.138234   29973 config.go:182] Loaded profile config "multinode-553062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:27:19.138316   29973 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/config.json ...
	I1107 23:27:19.140194   29973 out.go:177] * Starting worker node multinode-553062-m02 in cluster multinode-553062
	I1107 23:27:19.141637   29973 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:27:19.141657   29973 cache.go:56] Caching tarball of preloaded images
	I1107 23:27:19.141732   29973 preload.go:174] Found /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1107 23:27:19.141743   29973 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1107 23:27:19.141804   29973 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/config.json ...
	I1107 23:27:19.141942   29973 start.go:365] acquiring machines lock for multinode-553062-m02: {Name:mkf032f30be570950285b6e092e75fb29cc3d166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1107 23:27:19.141978   29973 start.go:369] acquired machines lock for "multinode-553062-m02" in 19.438µs
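
The machines lock above is a named mutex with a 500ms retry delay and a 13m timeout. minikube delegates this to a mutex library; the sketch below only illustrates the Delay/Timeout polling pattern that spec suggests, using an O_EXCL lockfile (the path is illustrative, not minikube's):

    // machines_lock.go - sketch of a polling file lock with Delay/Timeout
    // semantics like the {Delay:500ms Timeout:13m0s} spec logged above.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s", path)
            }
            time.Sleep(delay) // another process holds the lock; poll again
        }
    }

    func main() {
        release, err := acquire("/tmp/mk-machines.lock", 500*time.Millisecond, 13*time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        fmt.Println("acquired machines lock")
    }
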
	I1107 23:27:19.141993   29973 start.go:93] Provisioning new machine with config: &{Name:multinode-553062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-553062 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1107 23:27:19.142051   29973 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1107 23:27:19.143842   29973 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1107 23:27:19.143904   29973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:27:19.143930   29973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:27:19.157527   29973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42837
	I1107 23:27:19.157892   29973 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:27:19.158315   29973 main.go:141] libmachine: Using API Version  1
	I1107 23:27:19.158338   29973 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:27:19.158638   29973 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:27:19.158831   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetMachineName
	I1107 23:27:19.159018   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .DriverName
	I1107 23:27:19.159188   29973 start.go:159] libmachine.API.Create for "multinode-553062" (driver="kvm2")
	I1107 23:27:19.159219   29973 client.go:168] LocalClient.Create starting
	I1107 23:27:19.159244   29973 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem
	I1107 23:27:19.159272   29973 main.go:141] libmachine: Decoding PEM data...
	I1107 23:27:19.159286   29973 main.go:141] libmachine: Parsing certificate...
	I1107 23:27:19.159336   29973 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem
	I1107 23:27:19.159354   29973 main.go:141] libmachine: Decoding PEM data...
	I1107 23:27:19.159365   29973 main.go:141] libmachine: Parsing certificate...
	I1107 23:27:19.159381   29973 main.go:141] libmachine: Running pre-create checks...
	I1107 23:27:19.159390   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .PreCreateCheck
	I1107 23:27:19.159575   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetConfigRaw
	I1107 23:27:19.159926   29973 main.go:141] libmachine: Creating machine...
	I1107 23:27:19.159942   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .Create
	I1107 23:27:19.160086   29973 main.go:141] libmachine: (multinode-553062-m02) Creating KVM machine...
	I1107 23:27:19.161209   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | found existing default KVM network
	I1107 23:27:19.161379   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | found existing private KVM network mk-multinode-553062
	I1107 23:27:19.161588   29973 main.go:141] libmachine: (multinode-553062-m02) Setting up store path in /home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062-m02 ...
	I1107 23:27:19.161610   29973 main.go:141] libmachine: (multinode-553062-m02) Building disk image from file:///home/jenkins/minikube-integration/17585-9647/.minikube/cache/iso/amd64/minikube-v1.32.1-amd64.iso
	I1107 23:27:19.161632   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | I1107 23:27:19.161568   30347 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17585-9647/.minikube
	I1107 23:27:19.161775   29973 main.go:141] libmachine: (multinode-553062-m02) Downloading /home/jenkins/minikube-integration/17585-9647/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17585-9647/.minikube/cache/iso/amd64/minikube-v1.32.1-amd64.iso...
	I1107 23:27:19.361792   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | I1107 23:27:19.361669   30347 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062-m02/id_rsa...
	I1107 23:27:19.501927   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | I1107 23:27:19.501809   30347 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062-m02/multinode-553062-m02.rawdisk...
	I1107 23:27:19.501958   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | Writing magic tar header
	I1107 23:27:19.501973   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | Writing SSH key tar header
	I1107 23:27:19.501986   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | I1107 23:27:19.501907   30347 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062-m02 ...
	I1107 23:27:19.502130   29973 main.go:141] libmachine: (multinode-553062-m02) Setting executable bit set on /home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062-m02 (perms=drwx------)
	I1107 23:27:19.502155   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062-m02
	I1107 23:27:19.502171   29973 main.go:141] libmachine: (multinode-553062-m02) Setting executable bit set on /home/jenkins/minikube-integration/17585-9647/.minikube/machines (perms=drwxr-xr-x)
	I1107 23:27:19.502188   29973 main.go:141] libmachine: (multinode-553062-m02) Setting executable bit set on /home/jenkins/minikube-integration/17585-9647/.minikube (perms=drwxr-xr-x)
	I1107 23:27:19.502201   29973 main.go:141] libmachine: (multinode-553062-m02) Setting executable bit set on /home/jenkins/minikube-integration/17585-9647 (perms=drwxrwxr-x)
	I1107 23:27:19.502219   29973 main.go:141] libmachine: (multinode-553062-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1107 23:27:19.502233   29973 main.go:141] libmachine: (multinode-553062-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1107 23:27:19.502253   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17585-9647/.minikube/machines
	I1107 23:27:19.502267   29973 main.go:141] libmachine: (multinode-553062-m02) Creating domain...
	I1107 23:27:19.502286   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17585-9647/.minikube
	I1107 23:27:19.502302   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17585-9647
	I1107 23:27:19.502322   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1107 23:27:19.502335   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | Checking permissions on dir: /home/jenkins
	I1107 23:27:19.502349   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | Checking permissions on dir: /home
	I1107 23:27:19.502361   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | Skipping /home - not owner
	I1107 23:27:19.503216   29973 main.go:141] libmachine: (multinode-553062-m02) define libvirt domain using xml: 
	I1107 23:27:19.503240   29973 main.go:141] libmachine: (multinode-553062-m02) <domain type='kvm'>
	I1107 23:27:19.503250   29973 main.go:141] libmachine: (multinode-553062-m02)   <name>multinode-553062-m02</name>
	I1107 23:27:19.503260   29973 main.go:141] libmachine: (multinode-553062-m02)   <memory unit='MiB'>2200</memory>
	I1107 23:27:19.503271   29973 main.go:141] libmachine: (multinode-553062-m02)   <vcpu>2</vcpu>
	I1107 23:27:19.503286   29973 main.go:141] libmachine: (multinode-553062-m02)   <features>
	I1107 23:27:19.503299   29973 main.go:141] libmachine: (multinode-553062-m02)     <acpi/>
	I1107 23:27:19.503311   29973 main.go:141] libmachine: (multinode-553062-m02)     <apic/>
	I1107 23:27:19.503323   29973 main.go:141] libmachine: (multinode-553062-m02)     <pae/>
	I1107 23:27:19.503334   29973 main.go:141] libmachine: (multinode-553062-m02)     
	I1107 23:27:19.503344   29973 main.go:141] libmachine: (multinode-553062-m02)   </features>
	I1107 23:27:19.503360   29973 main.go:141] libmachine: (multinode-553062-m02)   <cpu mode='host-passthrough'>
	I1107 23:27:19.503385   29973 main.go:141] libmachine: (multinode-553062-m02)   
	I1107 23:27:19.503408   29973 main.go:141] libmachine: (multinode-553062-m02)   </cpu>
	I1107 23:27:19.503420   29973 main.go:141] libmachine: (multinode-553062-m02)   <os>
	I1107 23:27:19.503433   29973 main.go:141] libmachine: (multinode-553062-m02)     <type>hvm</type>
	I1107 23:27:19.503451   29973 main.go:141] libmachine: (multinode-553062-m02)     <boot dev='cdrom'/>
	I1107 23:27:19.503464   29973 main.go:141] libmachine: (multinode-553062-m02)     <boot dev='hd'/>
	I1107 23:27:19.503479   29973 main.go:141] libmachine: (multinode-553062-m02)     <bootmenu enable='no'/>
	I1107 23:27:19.503497   29973 main.go:141] libmachine: (multinode-553062-m02)   </os>
	I1107 23:27:19.503511   29973 main.go:141] libmachine: (multinode-553062-m02)   <devices>
	I1107 23:27:19.503525   29973 main.go:141] libmachine: (multinode-553062-m02)     <disk type='file' device='cdrom'>
	I1107 23:27:19.503544   29973 main.go:141] libmachine: (multinode-553062-m02)       <source file='/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062-m02/boot2docker.iso'/>
	I1107 23:27:19.503561   29973 main.go:141] libmachine: (multinode-553062-m02)       <target dev='hdc' bus='scsi'/>
	I1107 23:27:19.503576   29973 main.go:141] libmachine: (multinode-553062-m02)       <readonly/>
	I1107 23:27:19.503589   29973 main.go:141] libmachine: (multinode-553062-m02)     </disk>
	I1107 23:27:19.503605   29973 main.go:141] libmachine: (multinode-553062-m02)     <disk type='file' device='disk'>
	I1107 23:27:19.503620   29973 main.go:141] libmachine: (multinode-553062-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1107 23:27:19.503639   29973 main.go:141] libmachine: (multinode-553062-m02)       <source file='/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062-m02/multinode-553062-m02.rawdisk'/>
	I1107 23:27:19.503661   29973 main.go:141] libmachine: (multinode-553062-m02)       <target dev='hda' bus='virtio'/>
	I1107 23:27:19.503676   29973 main.go:141] libmachine: (multinode-553062-m02)     </disk>
	I1107 23:27:19.503690   29973 main.go:141] libmachine: (multinode-553062-m02)     <interface type='network'>
	I1107 23:27:19.503706   29973 main.go:141] libmachine: (multinode-553062-m02)       <source network='mk-multinode-553062'/>
	I1107 23:27:19.503720   29973 main.go:141] libmachine: (multinode-553062-m02)       <model type='virtio'/>
	I1107 23:27:19.503734   29973 main.go:141] libmachine: (multinode-553062-m02)     </interface>
	I1107 23:27:19.503752   29973 main.go:141] libmachine: (multinode-553062-m02)     <interface type='network'>
	I1107 23:27:19.503767   29973 main.go:141] libmachine: (multinode-553062-m02)       <source network='default'/>
	I1107 23:27:19.503780   29973 main.go:141] libmachine: (multinode-553062-m02)       <model type='virtio'/>
	I1107 23:27:19.503795   29973 main.go:141] libmachine: (multinode-553062-m02)     </interface>
	I1107 23:27:19.503808   29973 main.go:141] libmachine: (multinode-553062-m02)     <serial type='pty'>
	I1107 23:27:19.503824   29973 main.go:141] libmachine: (multinode-553062-m02)       <target port='0'/>
	I1107 23:27:19.503837   29973 main.go:141] libmachine: (multinode-553062-m02)     </serial>
	I1107 23:27:19.503858   29973 main.go:141] libmachine: (multinode-553062-m02)     <console type='pty'>
	I1107 23:27:19.503882   29973 main.go:141] libmachine: (multinode-553062-m02)       <target type='serial' port='0'/>
	I1107 23:27:19.503896   29973 main.go:141] libmachine: (multinode-553062-m02)     </console>
	I1107 23:27:19.503907   29973 main.go:141] libmachine: (multinode-553062-m02)     <rng model='virtio'>
	I1107 23:27:19.503923   29973 main.go:141] libmachine: (multinode-553062-m02)       <backend model='random'>/dev/random</backend>
	I1107 23:27:19.503934   29973 main.go:141] libmachine: (multinode-553062-m02)     </rng>
	I1107 23:27:19.503945   29973 main.go:141] libmachine: (multinode-553062-m02)     
	I1107 23:27:19.503960   29973 main.go:141] libmachine: (multinode-553062-m02)     
	I1107 23:27:19.503973   29973 main.go:141] libmachine: (multinode-553062-m02)   </devices>
	I1107 23:27:19.503985   29973 main.go:141] libmachine: (multinode-553062-m02) </domain>
	I1107 23:27:19.504000   29973 main.go:141] libmachine: (multinode-553062-m02) 
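
The <domain> XML logged above is rendered from a template and handed to libvirt to define the VM. A trimmed sketch of rendering a comparable document with text/template — the struct fields and template text here are illustrative, not minikube's actual template:

    // domain_xml.go - sketch: render a minimal libvirt domain XML like the one
    // logged above. Field names, paths, and template text are illustrative.
    package main

    import (
        "os"
        "text/template"
    )

    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMiB}}</memory>
      <vcpu>{{.CPUs}}</vcpu>
      <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
      <devices>
        <disk type='file' device='disk'>
          <source file='{{.DiskPath}}'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='{{.Network}}'/>
          <model type='virtio'/>
        </interface>
      </devices>
    </domain>
    `

    func main() {
        params := struct {
            Name, DiskPath, Network string
            MemoryMiB, CPUs         int
        }{"multinode-553062-m02", "/path/to/multinode-553062-m02.rawdisk", "mk-multinode-553062", 2200, 2}
        t := template.Must(template.New("domain").Parse(domainTmpl))
        if err := t.Execute(os.Stdout, params); err != nil {
            panic(err)
        }
    }
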
	I1107 23:27:19.510586   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:51:25:ca in network default
	I1107 23:27:19.511134   29973 main.go:141] libmachine: (multinode-553062-m02) Ensuring networks are active...
	I1107 23:27:19.511153   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:19.511846   29973 main.go:141] libmachine: (multinode-553062-m02) Ensuring network default is active
	I1107 23:27:19.512126   29973 main.go:141] libmachine: (multinode-553062-m02) Ensuring network mk-multinode-553062 is active
	I1107 23:27:19.512548   29973 main.go:141] libmachine: (multinode-553062-m02) Getting domain xml...
	I1107 23:27:19.513292   29973 main.go:141] libmachine: (multinode-553062-m02) Creating domain...
	I1107 23:27:20.738000   29973 main.go:141] libmachine: (multinode-553062-m02) Waiting to get IP...
	I1107 23:27:20.738871   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:20.739250   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | unable to find current IP address of domain multinode-553062-m02 in network mk-multinode-553062
	I1107 23:27:20.739269   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | I1107 23:27:20.739226   30347 retry.go:31] will retry after 190.946222ms: waiting for machine to come up
	I1107 23:27:20.931556   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:20.932061   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | unable to find current IP address of domain multinode-553062-m02 in network mk-multinode-553062
	I1107 23:27:20.932098   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | I1107 23:27:20.931998   30347 retry.go:31] will retry after 367.424306ms: waiting for machine to come up
	I1107 23:27:21.300474   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:21.300930   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | unable to find current IP address of domain multinode-553062-m02 in network mk-multinode-553062
	I1107 23:27:21.300964   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | I1107 23:27:21.300895   30347 retry.go:31] will retry after 449.491392ms: waiting for machine to come up
	I1107 23:27:21.752803   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:21.753239   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | unable to find current IP address of domain multinode-553062-m02 in network mk-multinode-553062
	I1107 23:27:21.753262   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | I1107 23:27:21.753183   30347 retry.go:31] will retry after 448.36111ms: waiting for machine to come up
	I1107 23:27:22.202780   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:22.203170   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | unable to find current IP address of domain multinode-553062-m02 in network mk-multinode-553062
	I1107 23:27:22.203200   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | I1107 23:27:22.203136   30347 retry.go:31] will retry after 466.394228ms: waiting for machine to come up
	I1107 23:27:22.670949   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:22.671382   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | unable to find current IP address of domain multinode-553062-m02 in network mk-multinode-553062
	I1107 23:27:22.671410   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | I1107 23:27:22.671345   30347 retry.go:31] will retry after 728.816569ms: waiting for machine to come up
	I1107 23:27:23.402318   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:23.402767   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | unable to find current IP address of domain multinode-553062-m02 in network mk-multinode-553062
	I1107 23:27:23.402796   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | I1107 23:27:23.402708   30347 retry.go:31] will retry after 924.484197ms: waiting for machine to come up
	I1107 23:27:24.329055   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:24.329488   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | unable to find current IP address of domain multinode-553062-m02 in network mk-multinode-553062
	I1107 23:27:24.329520   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | I1107 23:27:24.329437   30347 retry.go:31] will retry after 1.48776522s: waiting for machine to come up
	I1107 23:27:25.819377   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:25.819761   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | unable to find current IP address of domain multinode-553062-m02 in network mk-multinode-553062
	I1107 23:27:25.819785   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | I1107 23:27:25.819734   30347 retry.go:31] will retry after 1.239434044s: waiting for machine to come up
	I1107 23:27:27.060945   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:27.061367   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | unable to find current IP address of domain multinode-553062-m02 in network mk-multinode-553062
	I1107 23:27:27.061395   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | I1107 23:27:27.061304   30347 retry.go:31] will retry after 1.784116229s: waiting for machine to come up
	I1107 23:27:28.848328   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:28.848888   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | unable to find current IP address of domain multinode-553062-m02 in network mk-multinode-553062
	I1107 23:27:28.848920   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | I1107 23:27:28.848810   30347 retry.go:31] will retry after 2.352554445s: waiting for machine to come up
	I1107 23:27:31.203871   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:31.204346   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | unable to find current IP address of domain multinode-553062-m02 in network mk-multinode-553062
	I1107 23:27:31.204381   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | I1107 23:27:31.204282   30347 retry.go:31] will retry after 3.427584632s: waiting for machine to come up
	I1107 23:27:34.633804   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:34.634160   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | unable to find current IP address of domain multinode-553062-m02 in network mk-multinode-553062
	I1107 23:27:34.634178   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | I1107 23:27:34.634130   30347 retry.go:31] will retry after 3.894341089s: waiting for machine to come up
	I1107 23:27:38.533179   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:38.533598   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | unable to find current IP address of domain multinode-553062-m02 in network mk-multinode-553062
	I1107 23:27:38.533621   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | I1107 23:27:38.533554   30347 retry.go:31] will retry after 4.527377268s: waiting for machine to come up
	I1107 23:27:43.065294   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:43.065687   29973 main.go:141] libmachine: (multinode-553062-m02) Found IP for machine: 192.168.39.137
	I1107 23:27:43.065719   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has current primary IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:43.065730   29973 main.go:141] libmachine: (multinode-553062-m02) Reserving static IP address...
	I1107 23:27:43.066088   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | unable to find host DHCP lease matching {name: "multinode-553062-m02", mac: "52:54:00:49:ff:75", ip: "192.168.39.137"} in network mk-multinode-553062
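
The "will retry after ..." lines above trace a jittered, roughly exponential backoff around the DHCP-lease lookup: each poll fails until the guest boots and obtains an address, and the delay grows from ~200ms toward several seconds. A terminating sketch of that shape (lookupIP is a stand-in for the libvirt lease query):

    // wait_ip.go - sketch of the backoff visible in the retry.go lines above.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for reading the domain's DHCP lease; here it succeeds
    // on the 5th call so the sketch terminates.
    func lookupIP(attempt int) (string, error) {
        if attempt < 5 {
            return "", errors.New("unable to find current IP address")
        }
        return "192.168.39.137", nil
    }

    func main() {
        delay := 200 * time.Millisecond
        for attempt := 1; ; attempt++ {
            ip, err := lookupIP(attempt)
            if err == nil {
                fmt.Println("found IP:", ip)
                return
            }
            // grow the wait and add jitter so parallel creations don't poll in lockstep
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            if delay < 4*time.Second {
                delay *= 2
            }
        }
    }
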
	I1107 23:27:43.134426   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | Getting to WaitForSSH function...
	I1107 23:27:43.134463   29973 main.go:141] libmachine: (multinode-553062-m02) Reserved static IP address: 192.168.39.137
	I1107 23:27:43.134477   29973 main.go:141] libmachine: (multinode-553062-m02) Waiting for SSH to be available...
	I1107 23:27:43.137007   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:43.137293   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:minikube Clientid:01:52:54:00:49:ff:75}
	I1107 23:27:43.137324   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:43.137434   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | Using SSH client type: external
	I1107 23:27:43.137461   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062-m02/id_rsa (-rw-------)
	I1107 23:27:43.137495   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1107 23:27:43.137509   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | About to run SSH command:
	I1107 23:27:43.137526   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | exit 0
	I1107 23:27:43.224514   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | SSH cmd err, output: <nil>: 
	I1107 23:27:43.224787   29973 main.go:141] libmachine: (multinode-553062-m02) KVM machine creation complete!
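
"Using SSH client type: external" in the lines above means the driver shells out to /usr/bin/ssh with the hardened options logged (no known_hosts, key-only auth) and runs `exit 0` until it succeeds. A sketch of that readiness probe — the IP and key path are copied from this run purely as placeholders:

    // ssh_probe.go - sketch: run `exit 0` over ssh with the options from the
    // log above; a nil error means sshd is up and accepting our key.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func sshReady(ip, key string) bool {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", key,
            "docker@" + ip,
            "exit 0",
        }
        return exec.Command("ssh", args...).Run() == nil
    }

    func main() {
        for !sshReady("192.168.39.137", "/path/to/id_rsa") {
            time.Sleep(time.Second)
        }
        fmt.Println("SSH available")
    }
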
	I1107 23:27:43.225110   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetConfigRaw
	I1107 23:27:43.225652   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .DriverName
	I1107 23:27:43.225835   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .DriverName
	I1107 23:27:43.225985   29973 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1107 23:27:43.226004   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetState
	I1107 23:27:43.227188   29973 main.go:141] libmachine: Detecting operating system of created instance...
	I1107 23:27:43.227206   29973 main.go:141] libmachine: Waiting for SSH to be available...
	I1107 23:27:43.227216   29973 main.go:141] libmachine: Getting to WaitForSSH function...
	I1107 23:27:43.227233   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHHostname
	I1107 23:27:43.229742   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:43.230152   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-553062-m02 Clientid:01:52:54:00:49:ff:75}
	I1107 23:27:43.230188   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:43.230267   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHPort
	I1107 23:27:43.230472   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHKeyPath
	I1107 23:27:43.230629   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHKeyPath
	I1107 23:27:43.230739   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHUsername
	I1107 23:27:43.230878   29973 main.go:141] libmachine: Using SSH client type: native
	I1107 23:27:43.231246   29973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I1107 23:27:43.231257   29973 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1107 23:27:43.343731   29973 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1107 23:27:43.343753   29973 main.go:141] libmachine: Detecting the provisioner...
	I1107 23:27:43.343761   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHHostname
	I1107 23:27:43.346526   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:43.346851   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-553062-m02 Clientid:01:52:54:00:49:ff:75}
	I1107 23:27:43.346889   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:43.347040   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHPort
	I1107 23:27:43.347211   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHKeyPath
	I1107 23:27:43.347387   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHKeyPath
	I1107 23:27:43.347529   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHUsername
	I1107 23:27:43.347704   29973 main.go:141] libmachine: Using SSH client type: native
	I1107 23:27:43.348179   29973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I1107 23:27:43.348198   29973 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1107 23:27:43.461343   29973 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb75713b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1107 23:27:43.461437   29973 main.go:141] libmachine: found compatible host: buildroot
	I1107 23:27:43.461451   29973 main.go:141] libmachine: Provisioning with buildroot...
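
Detecting the provisioner boils down to fetching /etc/os-release and matching its fields against known distributions; the Buildroot output above selects the buildroot provisioner. A sketch of parsing that KEY=VALUE format:

    // os_release.go - sketch: parse /etc/os-release into a map, as the
    // provisioner detection above effectively does with the output it fetched.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func parseOSRelease(path string) (map[string]string, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err
        }
        defer f.Close()
        out := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            if k, v, ok := strings.Cut(line, "="); ok {
                out[k] = strings.Trim(v, `"`) // values may be quoted
            }
        }
        return out, sc.Err()
    }

    func main() {
        info, err := parseOSRelease("/etc/os-release")
        if err != nil {
            panic(err)
        }
        fmt.Println(info["ID"], info["VERSION_ID"]) // e.g. buildroot 2021.02.12
    }
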
	I1107 23:27:43.461463   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetMachineName
	I1107 23:27:43.461723   29973 buildroot.go:166] provisioning hostname "multinode-553062-m02"
	I1107 23:27:43.461747   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetMachineName
	I1107 23:27:43.461906   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHHostname
	I1107 23:27:43.464332   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:43.464738   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-553062-m02 Clientid:01:52:54:00:49:ff:75}
	I1107 23:27:43.464765   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:43.464941   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHPort
	I1107 23:27:43.465148   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHKeyPath
	I1107 23:27:43.465282   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHKeyPath
	I1107 23:27:43.465393   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHUsername
	I1107 23:27:43.465540   29973 main.go:141] libmachine: Using SSH client type: native
	I1107 23:27:43.465987   29973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I1107 23:27:43.466012   29973 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-553062-m02 && echo "multinode-553062-m02" | sudo tee /etc/hostname
	I1107 23:27:43.593789   29973 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-553062-m02
	
	I1107 23:27:43.593822   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHHostname
	I1107 23:27:43.596634   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:43.597022   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-553062-m02 Clientid:01:52:54:00:49:ff:75}
	I1107 23:27:43.597054   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:43.597248   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHPort
	I1107 23:27:43.597428   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHKeyPath
	I1107 23:27:43.597560   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHKeyPath
	I1107 23:27:43.597686   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHUsername
	I1107 23:27:43.597831   29973 main.go:141] libmachine: Using SSH client type: native
	I1107 23:27:43.598146   29973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I1107 23:27:43.598164   29973 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-553062-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-553062-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-553062-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 23:27:43.721249   29973 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1107 23:27:43.721278   29973 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1107 23:27:43.721292   29973 buildroot.go:174] setting up certificates
	I1107 23:27:43.721301   29973 provision.go:83] configureAuth start
	I1107 23:27:43.721310   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetMachineName
	I1107 23:27:43.721568   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetIP
	I1107 23:27:43.723821   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:43.724205   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-553062-m02 Clientid:01:52:54:00:49:ff:75}
	I1107 23:27:43.724234   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:43.724390   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHHostname
	I1107 23:27:43.726456   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:43.726720   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-553062-m02 Clientid:01:52:54:00:49:ff:75}
	I1107 23:27:43.726751   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:43.726856   29973 provision.go:138] copyHostCerts
	I1107 23:27:43.726878   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1107 23:27:43.726906   29973 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1107 23:27:43.726915   29973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1107 23:27:43.726975   29973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1107 23:27:43.727039   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1107 23:27:43.727058   29973 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1107 23:27:43.727064   29973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1107 23:27:43.727112   29973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1107 23:27:43.727183   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1107 23:27:43.727206   29973 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1107 23:27:43.727216   29973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1107 23:27:43.727251   29973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1107 23:27:43.727303   29973 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.multinode-553062-m02 san=[192.168.39.137 192.168.39.137 localhost 127.0.0.1 minikube multinode-553062-m02]
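
configureAuth issues a server certificate whose SAN list is logged above (the VM IP, localhost, 127.0.0.1, and the hostnames), signed by the local minikube CA. A trimmed crypto/x509 sketch of issuing a certificate with those SANs — self-signed here for brevity; the real flow signs with the ca.pem/ca-key.pem pair:

    // server_cert.go - sketch: issue a certificate carrying the SANs from the
    // log above. Self-signed to stay short; minikube signs with its CA key.
    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-553062-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
            DNSNames:     []string{"localhost", "minikube", "multinode-553062-m02"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.39.137"), net.ParseIP("127.0.0.1")},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
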
	I1107 23:27:43.859682   29973 provision.go:172] copyRemoteCerts
	I1107 23:27:43.859736   29973 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 23:27:43.859758   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHHostname
	I1107 23:27:43.862265   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:43.862608   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-553062-m02 Clientid:01:52:54:00:49:ff:75}
	I1107 23:27:43.862635   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:43.862826   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHPort
	I1107 23:27:43.863022   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHKeyPath
	I1107 23:27:43.863207   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHUsername
	I1107 23:27:43.863322   29973 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062-m02/id_rsa Username:docker}
	I1107 23:27:43.950079   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1107 23:27:43.950146   29973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1107 23:27:43.973647   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1107 23:27:43.973705   29973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1107 23:27:43.996920   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1107 23:27:43.996987   29973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1107 23:27:44.023202   29973 provision.go:86] duration metric: configureAuth took 301.891261ms
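
The copyRemoteCerts step above pushes each PEM to a root-owned destination on the VM. One common way to do that from Go, sketched below with golang.org/x/crypto/ssh, is to stream the file into `sudo tee`; this shows the pattern, not minikube's exact transport, and the host, key path, and destination reuse values from this run as placeholders:

    // push_cert.go - sketch: stream a local file to a root-owned path over SSH
    // by piping into `sudo tee`. An alternative to scp; not minikube's code.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile("/path/to/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "192.168.39.137:22", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()

        src, err := os.Open("ca.pem")
        if err != nil {
            panic(err)
        }
        defer src.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        sess.Stdin = src
        if err := sess.Run("sudo tee /etc/docker/ca.pem >/dev/null"); err != nil {
            panic(err)
        }
        fmt.Println("copied ca.pem")
    }
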
	I1107 23:27:44.023228   29973 buildroot.go:189] setting minikube options for container-runtime
	I1107 23:27:44.023398   29973 config.go:182] Loaded profile config "multinode-553062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:27:44.023460   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHHostname
	I1107 23:27:44.025972   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:44.026316   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-553062-m02 Clientid:01:52:54:00:49:ff:75}
	I1107 23:27:44.026342   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:44.026528   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHPort
	I1107 23:27:44.026695   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHKeyPath
	I1107 23:27:44.026826   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHKeyPath
	I1107 23:27:44.026937   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHUsername
	I1107 23:27:44.027074   29973 main.go:141] libmachine: Using SSH client type: native
	I1107 23:27:44.027523   29973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I1107 23:27:44.027550   29973 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1107 23:27:44.330829   29973 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1107 23:27:44.330850   29973 main.go:141] libmachine: Checking connection to Docker...
	I1107 23:27:44.330859   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetURL
	I1107 23:27:44.332163   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | Using libvirt version 6000000
	I1107 23:27:44.334561   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:44.334919   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-553062-m02 Clientid:01:52:54:00:49:ff:75}
	I1107 23:27:44.334950   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:44.335180   29973 main.go:141] libmachine: Docker is up and running!
	I1107 23:27:44.335192   29973 main.go:141] libmachine: Reticulating splines...
	I1107 23:27:44.335198   29973 client.go:171] LocalClient.Create took 25.175971206s
	I1107 23:27:44.335222   29973 start.go:167] duration metric: libmachine.API.Create for "multinode-553062" took 25.176035787s
	I1107 23:27:44.335232   29973 start.go:300] post-start starting for "multinode-553062-m02" (driver="kvm2")
	I1107 23:27:44.335240   29973 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 23:27:44.335259   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .DriverName
	I1107 23:27:44.335473   29973 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 23:27:44.335496   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHHostname
	I1107 23:27:44.337479   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:44.337774   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-553062-m02 Clientid:01:52:54:00:49:ff:75}
	I1107 23:27:44.337803   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:44.337904   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHPort
	I1107 23:27:44.338083   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHKeyPath
	I1107 23:27:44.338237   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHUsername
	I1107 23:27:44.338373   29973 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062-m02/id_rsa Username:docker}
	I1107 23:27:44.426014   29973 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 23:27:44.430367   29973 command_runner.go:130] > NAME=Buildroot
	I1107 23:27:44.430386   29973 command_runner.go:130] > VERSION=2021.02.12-1-gb75713b-dirty
	I1107 23:27:44.430392   29973 command_runner.go:130] > ID=buildroot
	I1107 23:27:44.430400   29973 command_runner.go:130] > VERSION_ID=2021.02.12
	I1107 23:27:44.430408   29973 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1107 23:27:44.430467   29973 info.go:137] Remote host: Buildroot 2021.02.12
	I1107 23:27:44.430490   29973 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1107 23:27:44.430556   29973 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1107 23:27:44.430628   29973 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1107 23:27:44.430638   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> /etc/ssl/certs/168482.pem
	I1107 23:27:44.430718   29973 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 23:27:44.438577   29973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
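
The scp above is driven by minikube's file sync: anything placed under $MINIKUBE_HOME/.minikube/files/<path> is copied into the guest at /<path>, which is how the test's 168482.pem lands in /etc/ssl/certs. Illustrative use of the same mechanism:

    mkdir -p ~/.minikube/files/etc/ssl/certs
    cp extra-ca.pem ~/.minikube/files/etc/ssl/certs/   # synced into the node on the next start
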
	I1107 23:27:44.461344   29973 start.go:303] post-start completed in 126.10091ms
	I1107 23:27:44.461393   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetConfigRaw
	I1107 23:27:44.461919   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetIP
	I1107 23:27:44.464587   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:44.464962   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-553062-m02 Clientid:01:52:54:00:49:ff:75}
	I1107 23:27:44.465003   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:44.465215   29973 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/config.json ...
	I1107 23:27:44.465386   29973 start.go:128] duration metric: createHost completed in 25.323326434s
	I1107 23:27:44.465408   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHHostname
	I1107 23:27:44.467419   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:44.467736   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-553062-m02 Clientid:01:52:54:00:49:ff:75}
	I1107 23:27:44.467768   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:44.467916   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHPort
	I1107 23:27:44.468074   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHKeyPath
	I1107 23:27:44.468200   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHKeyPath
	I1107 23:27:44.468326   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHUsername
	I1107 23:27:44.468491   29973 main.go:141] libmachine: Using SSH client type: native
	I1107 23:27:44.468790   29973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I1107 23:27:44.468801   29973 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1107 23:27:44.585519   29973 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699399664.570044631
	
	I1107 23:27:44.585538   29973 fix.go:206] guest clock: 1699399664.570044631
	I1107 23:27:44.585550   29973 fix.go:219] Guest: 2023-11-07 23:27:44.570044631 +0000 UTC Remote: 2023-11-07 23:27:44.465397762 +0000 UTC m=+93.326926446 (delta=104.646869ms)
	I1107 23:27:44.585569   29973 fix.go:190] guest clock delta is within tolerance: 104.646869ms
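
The fix.go lines implement the guest-clock sanity check: the host timestamps the provisioning step, reads date +%s.%N over SSH, and accepts the machine when the delta is small (about 104ms here). A rough shell equivalent, with the tolerance assumed rather than taken from the log:

    guest=$(ssh docker@192.168.39.137 'date +%s.%N')
    host=$(date +%s.%N)
    # accept if |host - guest| is under 1 second (assumed tolerance)
    awk -v g="$guest" -v h="$host" 'BEGIN { d = h - g; if (d < 0) d = -d; exit !(d < 1) }'
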
	I1107 23:27:44.585576   29973 start.go:83] releasing machines lock for "multinode-553062-m02", held for 25.443589174s
	I1107 23:27:44.585597   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .DriverName
	I1107 23:27:44.585846   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetIP
	I1107 23:27:44.588282   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:44.588658   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-553062-m02 Clientid:01:52:54:00:49:ff:75}
	I1107 23:27:44.588686   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:44.591726   29973 out.go:177] * Found network options:
	I1107 23:27:44.593407   29973 out.go:177]   - NO_PROXY=192.168.39.246
	W1107 23:27:44.594790   29973 proxy.go:119] fail to check proxy env: Error ip not in block
	I1107 23:27:44.594842   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .DriverName
	I1107 23:27:44.595336   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .DriverName
	I1107 23:27:44.595507   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .DriverName
	I1107 23:27:44.595583   29973 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 23:27:44.595622   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHHostname
	W1107 23:27:44.595712   29973 proxy.go:119] fail to check proxy env: Error ip not in block
	I1107 23:27:44.595779   29973 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1107 23:27:44.595801   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHHostname
	I1107 23:27:44.598264   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:44.598542   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-553062-m02 Clientid:01:52:54:00:49:ff:75}
	I1107 23:27:44.598581   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:44.598606   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:44.598712   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHPort
	I1107 23:27:44.598879   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHKeyPath
	I1107 23:27:44.598956   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-553062-m02 Clientid:01:52:54:00:49:ff:75}
	I1107 23:27:44.598981   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:44.599032   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHUsername
	I1107 23:27:44.599177   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHPort
	I1107 23:27:44.599253   29973 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062-m02/id_rsa Username:docker}
	I1107 23:27:44.599361   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHKeyPath
	I1107 23:27:44.599513   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHUsername
	I1107 23:27:44.599634   29973 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062-m02/id_rsa Username:docker}
	I1107 23:27:44.836175   29973 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1107 23:27:44.836272   29973 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1107 23:27:44.842243   29973 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1107 23:27:44.842283   29973 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1107 23:27:44.842337   29973 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:27:44.855805   29973 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1107 23:27:44.855839   29973 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
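
Conflicting bridge/podman CNI configs are renamed rather than deleted, so the network plugin minikube installs stays authoritative while the originals remain recoverable. For the one file matched here, the find amounts to:

    sudo mv /etc/cni/net.d/87-podman-bridge.conflist \
            /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled
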
	I1107 23:27:44.855848   29973 start.go:472] detecting cgroup driver to use...
	I1107 23:27:44.855908   29973 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1107 23:27:44.868310   29973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1107 23:27:44.880590   29973 docker.go:203] disabling cri-docker service (if available) ...
	I1107 23:27:44.880640   29973 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1107 23:27:44.892225   29973 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1107 23:27:44.903689   29973 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1107 23:27:44.916399   29973 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1107 23:27:45.003851   29973 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1107 23:27:45.127984   29973 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1107 23:27:45.128024   29973 docker.go:219] disabling docker service ...
	I1107 23:27:45.128070   29973 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1107 23:27:45.142553   29973 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1107 23:27:45.154207   29973 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1107 23:27:45.154292   29973 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1107 23:27:45.254957   29973 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1107 23:27:45.255049   29973 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1107 23:27:45.363186   29973 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1107 23:27:45.363215   29973 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1107 23:27:45.363270   29973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
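
Stopping, disabling, and masking both docker.socket and docker.service ensures socket activation cannot restart dockerd underneath CRI-O; the final is-active probe confirms it stays down. The generic pattern, mirroring the commands above:

    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service        # symlinks the unit to /dev/null
    sudo systemctl is-active --quiet service docker || echo "docker is down"
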
	I1107 23:27:45.376027   29973 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 23:27:45.394075   29973 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
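
With /etc/crictl.yaml pointing at the CRI-O socket, crictl no longer needs --runtime-endpoint on every invocation. For example (illustrative, not from this log):

    sudo crictl info     # picks up unix:///var/run/crio/crio.sock from /etc/crictl.yaml
    sudo crictl ps -a
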
	I1107 23:27:45.394116   29973 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1107 23:27:45.394171   29973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:27:45.403861   29973 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1107 23:27:45.403904   29973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:27:45.413565   29973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:27:45.423012   29973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
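
After the three sed edits above, the drop-in should carry the pause-image override plus the cgroupfs settings. Reconstructed from the commands (key placement per a stock crio.conf; the actual drop-in may order sections differently):

    # /etc/crio/crio.conf.d/02-crio.conf (reconstructed excerpt)
    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"
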
	I1107 23:27:45.432477   29973 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1107 23:27:45.442292   29973 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1107 23:27:45.450905   29973 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1107 23:27:45.450938   29973 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1107 23:27:45.450976   29973 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1107 23:27:45.463071   29973 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
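
The sysctl probe fails until br_netfilter is loaded, hence the modprobe that follows; once loaded, the bridge-netfilter keys exist and bridged pod traffic becomes visible to iptables, which kube-proxy depends on. A quick verification sequence (illustrative):

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # now resolves, typically to 1
    cat /proc/sys/net/ipv4/ip_forward           # 1 after the echo above
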
	I1107 23:27:45.471727   29973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 23:27:45.575644   29973 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1107 23:27:45.736157   29973 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1107 23:27:45.736228   29973 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1107 23:27:45.740623   29973 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1107 23:27:45.740640   29973 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1107 23:27:45.740647   29973 command_runner.go:130] > Device: 16h/22d	Inode: 704         Links: 1
	I1107 23:27:45.740653   29973 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1107 23:27:45.740661   29973 command_runner.go:130] > Access: 2023-11-07 23:27:45.708610175 +0000
	I1107 23:27:45.740667   29973 command_runner.go:130] > Modify: 2023-11-07 23:27:45.708610175 +0000
	I1107 23:27:45.740672   29973 command_runner.go:130] > Change: 2023-11-07 23:27:45.708610175 +0000
	I1107 23:27:45.740675   29973 command_runner.go:130] >  Birth: -
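
The "Will wait 60s for socket path" step boils down to polling stat on the CRI socket until CRI-O finishes restarting. A plausible equivalent of that loop:

    # poll up to 60s for the CRI-O socket (assumed shape of the wait)
    for _ in $(seq 1 60); do
      stat /var/run/crio/crio.sock >/dev/null 2>&1 && break
      sleep 1
    done
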
	I1107 23:27:45.740919   29973 start.go:540] Will wait 60s for crictl version
	I1107 23:27:45.740966   29973 ssh_runner.go:195] Run: which crictl
	I1107 23:27:45.745517   29973 command_runner.go:130] > /usr/bin/crictl
	I1107 23:27:45.745582   29973 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1107 23:27:45.782362   29973 command_runner.go:130] > Version:  0.1.0
	I1107 23:27:45.782385   29973 command_runner.go:130] > RuntimeName:  cri-o
	I1107 23:27:45.782390   29973 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1107 23:27:45.782395   29973 command_runner.go:130] > RuntimeApiVersion:  v1
	I1107 23:27:45.782715   29973 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1107 23:27:45.782777   29973 ssh_runner.go:195] Run: crio --version
	I1107 23:27:45.829587   29973 command_runner.go:130] > crio version 1.24.1
	I1107 23:27:45.829609   29973 command_runner.go:130] > Version:          1.24.1
	I1107 23:27:45.829625   29973 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1107 23:27:45.829630   29973 command_runner.go:130] > GitTreeState:     dirty
	I1107 23:27:45.829635   29973 command_runner.go:130] > BuildDate:        2023-11-07T07:32:32Z
	I1107 23:27:45.829652   29973 command_runner.go:130] > GoVersion:        go1.19.9
	I1107 23:27:45.829659   29973 command_runner.go:130] > Compiler:         gc
	I1107 23:27:45.829674   29973 command_runner.go:130] > Platform:         linux/amd64
	I1107 23:27:45.829684   29973 command_runner.go:130] > Linkmode:         dynamic
	I1107 23:27:45.829697   29973 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1107 23:27:45.829703   29973 command_runner.go:130] > SeccompEnabled:   true
	I1107 23:27:45.829708   29973 command_runner.go:130] > AppArmorEnabled:  false
	I1107 23:27:45.831105   29973 ssh_runner.go:195] Run: crio --version
	I1107 23:27:45.874035   29973 command_runner.go:130] > crio version 1.24.1
	I1107 23:27:45.874060   29973 command_runner.go:130] > Version:          1.24.1
	I1107 23:27:45.874070   29973 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1107 23:27:45.874077   29973 command_runner.go:130] > GitTreeState:     dirty
	I1107 23:27:45.874089   29973 command_runner.go:130] > BuildDate:        2023-11-07T07:32:32Z
	I1107 23:27:45.874097   29973 command_runner.go:130] > GoVersion:        go1.19.9
	I1107 23:27:45.874108   29973 command_runner.go:130] > Compiler:         gc
	I1107 23:27:45.874115   29973 command_runner.go:130] > Platform:         linux/amd64
	I1107 23:27:45.874129   29973 command_runner.go:130] > Linkmode:         dynamic
	I1107 23:27:45.874145   29973 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1107 23:27:45.874163   29973 command_runner.go:130] > SeccompEnabled:   true
	I1107 23:27:45.874173   29973 command_runner.go:130] > AppArmorEnabled:  false
	I1107 23:27:45.876411   29973 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1107 23:27:45.877957   29973 out.go:177]   - env NO_PROXY=192.168.39.246
	I1107 23:27:45.879257   29973 main.go:141] libmachine: (multinode-553062-m02) Calling .GetIP
	I1107 23:27:45.881562   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:45.881933   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-553062-m02 Clientid:01:52:54:00:49:ff:75}
	I1107 23:27:45.881953   29973 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:27:45.882158   29973 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1107 23:27:45.886269   29973 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
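
The grep/rewrite pair keeps the host.minikube.internal entry idempotent: it is only rewritten when missing, and /etc/hosts is replaced via a temp copy instead of edited in place (safer when the file is a mount point). Generalized as a hypothetical helper:

    add_host() {  # add_host <ip> <name> -- hypothetical helper mirroring the two commands above
      grep -q "$1"$'\t'"$2"'$' /etc/hosts && return
      { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts
    }
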
	I1107 23:27:45.897685   29973 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062 for IP: 192.168.39.137
	I1107 23:27:45.897706   29973 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:27:45.897855   29973 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1107 23:27:45.897918   29973 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1107 23:27:45.897936   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1107 23:27:45.897948   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1107 23:27:45.897961   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1107 23:27:45.897973   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1107 23:27:45.898023   29973 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem (1338 bytes)
	W1107 23:27:45.898048   29973 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848_empty.pem, impossibly tiny 0 bytes
	I1107 23:27:45.898058   29973 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 23:27:45.898078   29973 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1107 23:27:45.898101   29973 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1107 23:27:45.898122   29973 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1107 23:27:45.898158   29973 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem (1708 bytes)
	I1107 23:27:45.898182   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem -> /usr/share/ca-certificates/16848.pem
	I1107 23:27:45.898194   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> /usr/share/ca-certificates/168482.pem
	I1107 23:27:45.898206   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:27:45.898606   29973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 23:27:45.920268   29973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1107 23:27:45.941399   29973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 23:27:45.962880   29973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1107 23:27:45.985432   29973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem --> /usr/share/ca-certificates/16848.pem (1338 bytes)
	I1107 23:27:46.007036   29973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /usr/share/ca-certificates/168482.pem (1708 bytes)
	I1107 23:27:46.028773   29973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 23:27:46.050393   29973 ssh_runner.go:195] Run: openssl version
	I1107 23:27:46.056023   29973 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1107 23:27:46.056247   29973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16848.pem && ln -fs /usr/share/ca-certificates/16848.pem /etc/ssl/certs/16848.pem"
	I1107 23:27:46.066734   29973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16848.pem
	I1107 23:27:46.071177   29973 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1107 23:27:46.071247   29973 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1107 23:27:46.071284   29973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16848.pem
	I1107 23:27:46.076399   29973 command_runner.go:130] > 51391683
	I1107 23:27:46.076446   29973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16848.pem /etc/ssl/certs/51391683.0"
	I1107 23:27:46.086723   29973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168482.pem && ln -fs /usr/share/ca-certificates/168482.pem /etc/ssl/certs/168482.pem"
	I1107 23:27:46.097349   29973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168482.pem
	I1107 23:27:46.101645   29973 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1107 23:27:46.101923   29973 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1107 23:27:46.101959   29973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168482.pem
	I1107 23:27:46.107543   29973 command_runner.go:130] > 3ec20f2e
	I1107 23:27:46.107590   29973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168482.pem /etc/ssl/certs/3ec20f2e.0"
	I1107 23:27:46.118296   29973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 23:27:46.128665   29973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:27:46.133059   29973 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:27:46.133273   29973 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:27:46.133307   29973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:27:46.138527   29973 command_runner.go:130] > b5213941
	I1107 23:27:46.138578   29973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
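
Each CA is installed twice: under its own name in /usr/share/ca-certificates, and as a <subject-hash>.0 symlink in /etc/ssl/certs, the layout OpenSSL's CApath lookup expects. Condensed, for the minikube CA (hash b5213941 per the log):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # -> b5213941.0
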
	I1107 23:27:46.148914   29973 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1107 23:27:46.152669   29973 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1107 23:27:46.152719   29973 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1107 23:27:46.152802   29973 ssh_runner.go:195] Run: crio config
	I1107 23:27:46.203960   29973 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1107 23:27:46.203983   29973 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1107 23:27:46.203990   29973 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1107 23:27:46.203994   29973 command_runner.go:130] > #
	I1107 23:27:46.204012   29973 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1107 23:27:46.204023   29973 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1107 23:27:46.204033   29973 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1107 23:27:46.204047   29973 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1107 23:27:46.204053   29973 command_runner.go:130] > # reload'.
	I1107 23:27:46.204065   29973 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1107 23:27:46.204074   29973 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1107 23:27:46.204083   29973 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1107 23:27:46.204089   29973 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1107 23:27:46.204095   29973 command_runner.go:130] > [crio]
	I1107 23:27:46.204101   29973 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1107 23:27:46.204108   29973 command_runner.go:130] > # container images, in this directory.
	I1107 23:27:46.204114   29973 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1107 23:27:46.204132   29973 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1107 23:27:46.204147   29973 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1107 23:27:46.204158   29973 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1107 23:27:46.204168   29973 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1107 23:27:46.204180   29973 command_runner.go:130] > storage_driver = "overlay"
	I1107 23:27:46.204191   29973 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1107 23:27:46.204203   29973 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1107 23:27:46.204213   29973 command_runner.go:130] > storage_option = [
	I1107 23:27:46.204222   29973 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1107 23:27:46.204232   29973 command_runner.go:130] > ]
	I1107 23:27:46.204242   29973 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1107 23:27:46.204255   29973 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1107 23:27:46.204263   29973 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1107 23:27:46.204277   29973 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1107 23:27:46.204288   29973 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1107 23:27:46.204297   29973 command_runner.go:130] > # always happen on a node reboot
	I1107 23:27:46.204306   29973 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1107 23:27:46.204320   29973 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1107 23:27:46.204330   29973 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1107 23:27:46.204350   29973 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1107 23:27:46.204363   29973 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1107 23:27:46.204379   29973 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1107 23:27:46.204395   29973 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1107 23:27:46.204406   29973 command_runner.go:130] > # internal_wipe = true
	I1107 23:27:46.204415   29973 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1107 23:27:46.204429   29973 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1107 23:27:46.204442   29973 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1107 23:27:46.204453   29973 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1107 23:27:46.204466   29973 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1107 23:27:46.204475   29973 command_runner.go:130] > [crio.api]
	I1107 23:27:46.204485   29973 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1107 23:27:46.204496   29973 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1107 23:27:46.204507   29973 command_runner.go:130] > # IP address on which the stream server will listen.
	I1107 23:27:46.204519   29973 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1107 23:27:46.204530   29973 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1107 23:27:46.204542   29973 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1107 23:27:46.204552   29973 command_runner.go:130] > # stream_port = "0"
	I1107 23:27:46.204562   29973 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1107 23:27:46.204573   29973 command_runner.go:130] > # stream_enable_tls = false
	I1107 23:27:46.204583   29973 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1107 23:27:46.204593   29973 command_runner.go:130] > # stream_idle_timeout = ""
	I1107 23:27:46.204609   29973 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1107 23:27:46.204622   29973 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1107 23:27:46.204632   29973 command_runner.go:130] > # minutes.
	I1107 23:27:46.204666   29973 command_runner.go:130] > # stream_tls_cert = ""
	I1107 23:27:46.204676   29973 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1107 23:27:46.204682   29973 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1107 23:27:46.204686   29973 command_runner.go:130] > # stream_tls_key = ""
	I1107 23:27:46.204699   29973 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1107 23:27:46.204713   29973 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1107 23:27:46.204726   29973 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1107 23:27:46.204733   29973 command_runner.go:130] > # stream_tls_ca = ""
	I1107 23:27:46.204749   29973 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1107 23:27:46.204761   29973 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1107 23:27:46.204776   29973 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1107 23:27:46.204787   29973 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1107 23:27:46.204809   29973 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1107 23:27:46.204829   29973 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1107 23:27:46.204840   29973 command_runner.go:130] > [crio.runtime]
	I1107 23:27:46.204851   29973 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1107 23:27:46.204864   29973 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1107 23:27:46.204874   29973 command_runner.go:130] > # "nofile=1024:2048"
	I1107 23:27:46.204887   29973 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1107 23:27:46.204897   29973 command_runner.go:130] > # default_ulimits = [
	I1107 23:27:46.204904   29973 command_runner.go:130] > # ]
	I1107 23:27:46.204911   29973 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1107 23:27:46.204921   29973 command_runner.go:130] > # no_pivot = false
	I1107 23:27:46.204932   29973 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1107 23:27:46.204946   29973 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1107 23:27:46.204957   29973 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1107 23:27:46.204970   29973 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1107 23:27:46.204982   29973 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1107 23:27:46.204992   29973 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1107 23:27:46.205007   29973 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1107 23:27:46.205019   29973 command_runner.go:130] > # Cgroup setting for conmon
	I1107 23:27:46.205030   29973 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1107 23:27:46.205037   29973 command_runner.go:130] > conmon_cgroup = "pod"
	I1107 23:27:46.205049   29973 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1107 23:27:46.205061   29973 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1107 23:27:46.205075   29973 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1107 23:27:46.205085   29973 command_runner.go:130] > conmon_env = [
	I1107 23:27:46.205095   29973 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1107 23:27:46.205105   29973 command_runner.go:130] > ]
	I1107 23:27:46.205114   29973 command_runner.go:130] > # Additional environment variables to set for all the
	I1107 23:27:46.205125   29973 command_runner.go:130] > # containers. These are overridden if set in the
	I1107 23:27:46.205139   29973 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1107 23:27:46.205149   29973 command_runner.go:130] > # default_env = [
	I1107 23:27:46.205158   29973 command_runner.go:130] > # ]
	I1107 23:27:46.205167   29973 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1107 23:27:46.205177   29973 command_runner.go:130] > # selinux = false
	I1107 23:27:46.205191   29973 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1107 23:27:46.205206   29973 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1107 23:27:46.205220   29973 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1107 23:27:46.205229   29973 command_runner.go:130] > # seccomp_profile = ""
	I1107 23:27:46.205237   29973 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1107 23:27:46.205249   29973 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1107 23:27:46.205261   29973 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1107 23:27:46.205273   29973 command_runner.go:130] > # which might increase security.
	I1107 23:27:46.205284   29973 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1107 23:27:46.205298   29973 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1107 23:27:46.205312   29973 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1107 23:27:46.205326   29973 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1107 23:27:46.205340   29973 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1107 23:27:46.205349   29973 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:27:46.205360   29973 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1107 23:27:46.205373   29973 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1107 23:27:46.205384   29973 command_runner.go:130] > # the cgroup blockio controller.
	I1107 23:27:46.205395   29973 command_runner.go:130] > # blockio_config_file = ""
	I1107 23:27:46.205410   29973 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1107 23:27:46.205419   29973 command_runner.go:130] > # irqbalance daemon.
	I1107 23:27:46.205428   29973 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1107 23:27:46.205443   29973 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1107 23:27:46.205455   29973 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:27:46.205467   29973 command_runner.go:130] > # rdt_config_file = ""
	I1107 23:27:46.205480   29973 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1107 23:27:46.205490   29973 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1107 23:27:46.205501   29973 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1107 23:27:46.205510   29973 command_runner.go:130] > # separate_pull_cgroup = ""
	I1107 23:27:46.205524   29973 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1107 23:27:46.205538   29973 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1107 23:27:46.205545   29973 command_runner.go:130] > # will be added.
	I1107 23:27:46.205584   29973 command_runner.go:130] > # default_capabilities = [
	I1107 23:27:46.205595   29973 command_runner.go:130] > # 	"CHOWN",
	I1107 23:27:46.205602   29973 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1107 23:27:46.205611   29973 command_runner.go:130] > # 	"FSETID",
	I1107 23:27:46.205620   29973 command_runner.go:130] > # 	"FOWNER",
	I1107 23:27:46.205627   29973 command_runner.go:130] > # 	"SETGID",
	I1107 23:27:46.205638   29973 command_runner.go:130] > # 	"SETUID",
	I1107 23:27:46.205645   29973 command_runner.go:130] > # 	"SETPCAP",
	I1107 23:27:46.205655   29973 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1107 23:27:46.205661   29973 command_runner.go:130] > # 	"KILL",
	I1107 23:27:46.205670   29973 command_runner.go:130] > # ]
	I1107 23:27:46.205681   29973 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1107 23:27:46.205691   29973 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1107 23:27:46.205700   29973 command_runner.go:130] > # default_sysctls = [
	I1107 23:27:46.205707   29973 command_runner.go:130] > # ]
	I1107 23:27:46.205719   29973 command_runner.go:130] > # List of devices on the host that a
	I1107 23:27:46.205733   29973 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1107 23:27:46.205743   29973 command_runner.go:130] > # allowed_devices = [
	I1107 23:27:46.205753   29973 command_runner.go:130] > # 	"/dev/fuse",
	I1107 23:27:46.205761   29973 command_runner.go:130] > # ]
	I1107 23:27:46.205773   29973 command_runner.go:130] > # List of additional devices, specified as
	I1107 23:27:46.205783   29973 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1107 23:27:46.205795   29973 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1107 23:27:46.205820   29973 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1107 23:27:46.205831   29973 command_runner.go:130] > # additional_devices = [
	I1107 23:27:46.205837   29973 command_runner.go:130] > # ]
	I1107 23:27:46.205849   29973 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1107 23:27:46.205857   29973 command_runner.go:130] > # cdi_spec_dirs = [
	I1107 23:27:46.205866   29973 command_runner.go:130] > # 	"/etc/cdi",
	I1107 23:27:46.205871   29973 command_runner.go:130] > # 	"/var/run/cdi",
	I1107 23:27:46.205880   29973 command_runner.go:130] > # ]
	I1107 23:27:46.205891   29973 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1107 23:27:46.205905   29973 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1107 23:27:46.205915   29973 command_runner.go:130] > # Defaults to false.
	I1107 23:27:46.205927   29973 command_runner.go:130] > # device_ownership_from_security_context = false
	I1107 23:27:46.205940   29973 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1107 23:27:46.205951   29973 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1107 23:27:46.205958   29973 command_runner.go:130] > # hooks_dir = [
	I1107 23:27:46.205966   29973 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1107 23:27:46.205975   29973 command_runner.go:130] > # ]
	I1107 23:27:46.205989   29973 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1107 23:27:46.206007   29973 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1107 23:27:46.206019   29973 command_runner.go:130] > # its default mounts from the following two files:
	I1107 23:27:46.206027   29973 command_runner.go:130] > #
	I1107 23:27:46.206036   29973 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1107 23:27:46.206046   29973 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1107 23:27:46.206060   29973 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1107 23:27:46.206070   29973 command_runner.go:130] > #
	I1107 23:27:46.206080   29973 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1107 23:27:46.206096   29973 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1107 23:27:46.206110   29973 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1107 23:27:46.206120   29973 command_runner.go:130] > #      only add mounts it finds in this file.
	I1107 23:27:46.206124   29973 command_runner.go:130] > #
	I1107 23:27:46.206128   29973 command_runner.go:130] > # default_mounts_file = ""
	I1107 23:27:46.206141   29973 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1107 23:27:46.206156   29973 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1107 23:27:46.206166   29973 command_runner.go:130] > pids_limit = 1024
	I1107 23:27:46.206180   29973 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1107 23:27:46.206193   29973 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1107 23:27:46.206205   29973 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1107 23:27:46.206217   29973 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1107 23:27:46.206227   29973 command_runner.go:130] > # log_size_max = -1
	I1107 23:27:46.206239   29973 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1107 23:27:46.206251   29973 command_runner.go:130] > # log_to_journald = false
	I1107 23:27:46.206265   29973 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1107 23:27:46.206276   29973 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1107 23:27:46.206288   29973 command_runner.go:130] > # Path to directory for container attach sockets.
	I1107 23:27:46.206296   29973 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1107 23:27:46.206308   29973 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1107 23:27:46.206319   29973 command_runner.go:130] > # bind_mount_prefix = ""
	I1107 23:27:46.206331   29973 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1107 23:27:46.206342   29973 command_runner.go:130] > # read_only = false
	I1107 23:27:46.206355   29973 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1107 23:27:46.206368   29973 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1107 23:27:46.206377   29973 command_runner.go:130] > # live configuration reload.
	I1107 23:27:46.206383   29973 command_runner.go:130] > # log_level = "info"
	I1107 23:27:46.206392   29973 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1107 23:27:46.206405   29973 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:27:46.206415   29973 command_runner.go:130] > # log_filter = ""
	I1107 23:27:46.206428   29973 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1107 23:27:46.206441   29973 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1107 23:27:46.206451   29973 command_runner.go:130] > # separated by comma.
	I1107 23:27:46.206461   29973 command_runner.go:130] > # uid_mappings = ""
	I1107 23:27:46.206470   29973 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1107 23:27:46.206484   29973 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1107 23:27:46.206495   29973 command_runner.go:130] > # separated by comma.
	I1107 23:27:46.206506   29973 command_runner.go:130] > # gid_mappings = ""
	I1107 23:27:46.206519   29973 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1107 23:27:46.206532   29973 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1107 23:27:46.206545   29973 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1107 23:27:46.206553   29973 command_runner.go:130] > # minimum_mappable_uid = -1
	I1107 23:27:46.206584   29973 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1107 23:27:46.206600   29973 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1107 23:27:46.206611   29973 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1107 23:27:46.206621   29973 command_runner.go:130] > # minimum_mappable_gid = -1
	I1107 23:27:46.206634   29973 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1107 23:27:46.206643   29973 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1107 23:27:46.206652   29973 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1107 23:27:46.206663   29973 command_runner.go:130] > # ctr_stop_timeout = 30
	I1107 23:27:46.206676   29973 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1107 23:27:46.206690   29973 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1107 23:27:46.206701   29973 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1107 23:27:46.206713   29973 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1107 23:27:46.206719   29973 command_runner.go:130] > drop_infra_ctr = false
	I1107 23:27:46.206729   29973 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1107 23:27:46.206738   29973 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1107 23:27:46.206754   29973 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1107 23:27:46.206765   29973 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1107 23:27:46.206778   29973 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1107 23:27:46.206790   29973 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1107 23:27:46.206800   29973 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1107 23:27:46.206811   29973 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1107 23:27:46.206816   29973 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1107 23:27:46.206830   29973 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1107 23:27:46.206845   29973 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1107 23:27:46.206858   29973 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1107 23:27:46.206868   29973 command_runner.go:130] > # default_runtime = "runc"
	I1107 23:27:46.206881   29973 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1107 23:27:46.206894   29973 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating it as a directory).
	I1107 23:27:46.206907   29973 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1107 23:27:46.206919   29973 command_runner.go:130] > # creation as a file is not desired either.
	I1107 23:27:46.206933   29973 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1107 23:27:46.206945   29973 command_runner.go:130] > # the hostname is being managed dynamically.
	I1107 23:27:46.206957   29973 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1107 23:27:46.206964   29973 command_runner.go:130] > # ]
	I1107 23:27:46.206977   29973 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1107 23:27:46.206986   29973 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1107 23:27:46.207001   29973 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1107 23:27:46.207015   29973 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1107 23:27:46.207021   29973 command_runner.go:130] > #
	I1107 23:27:46.207029   29973 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1107 23:27:46.207041   29973 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1107 23:27:46.207051   29973 command_runner.go:130] > #  runtime_type = "oci"
	I1107 23:27:46.207062   29973 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1107 23:27:46.207069   29973 command_runner.go:130] > #  privileged_without_host_devices = false
	I1107 23:27:46.207074   29973 command_runner.go:130] > #  allowed_annotations = []
	I1107 23:27:46.207084   29973 command_runner.go:130] > # Where:
	I1107 23:27:46.207097   29973 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1107 23:27:46.207111   29973 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1107 23:27:46.207124   29973 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1107 23:27:46.207135   29973 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1107 23:27:46.207144   29973 command_runner.go:130] > #   in $PATH.
	I1107 23:27:46.207152   29973 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1107 23:27:46.207161   29973 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1107 23:27:46.207174   29973 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1107 23:27:46.207184   29973 command_runner.go:130] > #   state.
	I1107 23:27:46.207197   29973 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1107 23:27:46.207210   29973 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1107 23:27:46.207224   29973 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1107 23:27:46.207233   29973 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1107 23:27:46.207241   29973 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1107 23:27:46.207252   29973 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1107 23:27:46.207264   29973 command_runner.go:130] > #   The currently recognized values are:
	I1107 23:27:46.207278   29973 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1107 23:27:46.207293   29973 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1107 23:27:46.207305   29973 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1107 23:27:46.207318   29973 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1107 23:27:46.207326   29973 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1107 23:27:46.207357   29973 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1107 23:27:46.207371   29973 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1107 23:27:46.207385   29973 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1107 23:27:46.207397   29973 command_runner.go:130] > #   should be moved to the container's cgroup
	I1107 23:27:46.207407   29973 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1107 23:27:46.207412   29973 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1107 23:27:46.207420   29973 command_runner.go:130] > runtime_type = "oci"
	I1107 23:27:46.207428   29973 command_runner.go:130] > runtime_root = "/run/runc"
	I1107 23:27:46.207438   29973 command_runner.go:130] > runtime_config_path = ""
	I1107 23:27:46.207446   29973 command_runner.go:130] > monitor_path = ""
	I1107 23:27:46.207454   29973 command_runner.go:130] > monitor_cgroup = ""
	I1107 23:27:46.207465   29973 command_runner.go:130] > monitor_exec_cgroup = ""
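	The handler name in this table is what a Kubernetes RuntimeClass selects. A minimal cluster-side sketch (the RuntimeClass name is hypothetical; only the handler must match the [crio.runtime.runtimes.runc] entry above):

	cat <<'EOF' | kubectl apply -f -
	apiVersion: node.k8s.io/v1
	kind: RuntimeClass
	metadata:
	  name: runc-default        # hypothetical name
	handler: runc               # must match the runtime table entry
	EOF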
	I1107 23:27:46.207478   29973 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1107 23:27:46.207488   29973 command_runner.go:130] > # running containers
	I1107 23:27:46.207498   29973 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1107 23:27:46.207504   29973 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1107 23:27:46.207554   29973 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1107 23:27:46.207567   29973 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I1107 23:27:46.207579   29973 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1107 23:27:46.207587   29973 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1107 23:27:46.207592   29973 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1107 23:27:46.207600   29973 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1107 23:27:46.207608   29973 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1107 23:27:46.207616   29973 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1107 23:27:46.207631   29973 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1107 23:27:46.207642   29973 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1107 23:27:46.207656   29973 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1107 23:27:46.207671   29973 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1107 23:27:46.207682   29973 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1107 23:27:46.207695   29973 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1107 23:27:46.207711   29973 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1107 23:27:46.207727   29973 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1107 23:27:46.207740   29973 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1107 23:27:46.207754   29973 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1107 23:27:46.207761   29973 command_runner.go:130] > # Example:
	I1107 23:27:46.207767   29973 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1107 23:27:46.207779   29973 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1107 23:27:46.207792   29973 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1107 23:27:46.207801   29973 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1107 23:27:46.207810   29973 command_runner.go:130] > # cpuset = "0-1"
	I1107 23:27:46.207818   29973 command_runner.go:130] > # cpushares = 0
	I1107 23:27:46.207827   29973 command_runner.go:130] > # Where:
	I1107 23:27:46.207835   29973 command_runner.go:130] > # The workload name is workload-type.
	I1107 23:27:46.207846   29973 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1107 23:27:46.207854   29973 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1107 23:27:46.207868   29973 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1107 23:27:46.207885   29973 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1107 23:27:46.207897   29973 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1107 23:27:46.207903   29973 command_runner.go:130] > # 
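	On the pod side, opting into the example workload above would look roughly like the following sketch (the pod name, container name, and cpushares value are illustrative, not taken from this run):

	cat <<'EOF' | kubectl apply -f -
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo                  # illustrative name
	  annotations:
	    io.crio/workload: ""               # activation annotation; key only, value ignored
	    io.crio.workload-type/workload-demo: '{"cpushares": "512"}'   # per-container override
	spec:
	  containers:
	  - name: workload-demo
	    image: registry.k8s.io/pause:3.9
	EOF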
	I1107 23:27:46.207917   29973 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1107 23:27:46.207926   29973 command_runner.go:130] > #
	I1107 23:27:46.207932   29973 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1107 23:27:46.207941   29973 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1107 23:27:46.207955   29973 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1107 23:27:46.207969   29973 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1107 23:27:46.207982   29973 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1107 23:27:46.207991   29973 command_runner.go:130] > [crio.image]
	I1107 23:27:46.208006   29973 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1107 23:27:46.208017   29973 command_runner.go:130] > # default_transport = "docker://"
	I1107 23:27:46.208031   29973 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1107 23:27:46.208046   29973 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1107 23:27:46.208057   29973 command_runner.go:130] > # global_auth_file = ""
	I1107 23:27:46.208069   29973 command_runner.go:130] > # The image used to instantiate infra containers.
	I1107 23:27:46.208080   29973 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:27:46.208091   29973 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1107 23:27:46.208101   29973 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1107 23:27:46.208109   29973 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1107 23:27:46.208122   29973 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:27:46.208133   29973 command_runner.go:130] > # pause_image_auth_file = ""
	I1107 23:27:46.208144   29973 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1107 23:27:46.208158   29973 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1107 23:27:46.208175   29973 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1107 23:27:46.208185   29973 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1107 23:27:46.208190   29973 command_runner.go:130] > # pause_command = "/pause"
	I1107 23:27:46.208197   29973 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1107 23:27:46.208204   29973 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1107 23:27:46.208213   29973 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1107 23:27:46.208226   29973 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1107 23:27:46.208239   29973 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1107 23:27:46.208249   29973 command_runner.go:130] > # signature_policy = ""
	I1107 23:27:46.208262   29973 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1107 23:27:46.208276   29973 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1107 23:27:46.208285   29973 command_runner.go:130] > # changing them here.
	I1107 23:27:46.208293   29973 command_runner.go:130] > # insecure_registries = [
	I1107 23:27:46.208296   29973 command_runner.go:130] > # ]
	I1107 23:27:46.208302   29973 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1107 23:27:46.208310   29973 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1107 23:27:46.208314   29973 command_runner.go:130] > # image_volumes = "mkdir"
	I1107 23:27:46.208320   29973 command_runner.go:130] > # Temporary directory to use for storing big files
	I1107 23:27:46.208324   29973 command_runner.go:130] > # big_files_temporary_dir = ""
	I1107 23:27:46.208332   29973 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1107 23:27:46.208339   29973 command_runner.go:130] > # CNI plugins.
	I1107 23:27:46.208342   29973 command_runner.go:130] > [crio.network]
	I1107 23:27:46.208348   29973 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1107 23:27:46.208356   29973 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1107 23:27:46.208363   29973 command_runner.go:130] > # cni_default_network = ""
	I1107 23:27:46.208376   29973 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1107 23:27:46.208385   29973 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1107 23:27:46.208397   29973 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1107 23:27:46.208407   29973 command_runner.go:130] > # plugin_dirs = [
	I1107 23:27:46.208414   29973 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1107 23:27:46.208422   29973 command_runner.go:130] > # ]
	I1107 23:27:46.208432   29973 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1107 23:27:46.208439   29973 command_runner.go:130] > [crio.metrics]
	I1107 23:27:46.208444   29973 command_runner.go:130] > # Globally enable or disable metrics support.
	I1107 23:27:46.208450   29973 command_runner.go:130] > enable_metrics = true
	I1107 23:27:46.208455   29973 command_runner.go:130] > # Specify enabled metrics collectors.
	I1107 23:27:46.208462   29973 command_runner.go:130] > # Per default all metrics are enabled.
	I1107 23:27:46.208468   29973 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1107 23:27:46.208476   29973 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1107 23:27:46.208482   29973 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1107 23:27:46.208489   29973 command_runner.go:130] > # metrics_collectors = [
	I1107 23:27:46.208492   29973 command_runner.go:130] > # 	"operations",
	I1107 23:27:46.208497   29973 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1107 23:27:46.208504   29973 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1107 23:27:46.208508   29973 command_runner.go:130] > # 	"operations_errors",
	I1107 23:27:46.208515   29973 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1107 23:27:46.208519   29973 command_runner.go:130] > # 	"image_pulls_by_name",
	I1107 23:27:46.208527   29973 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1107 23:27:46.208531   29973 command_runner.go:130] > # 	"image_pulls_failures",
	I1107 23:27:46.208537   29973 command_runner.go:130] > # 	"image_pulls_successes",
	I1107 23:27:46.208542   29973 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1107 23:27:46.208549   29973 command_runner.go:130] > # 	"image_layer_reuse",
	I1107 23:27:46.208554   29973 command_runner.go:130] > # 	"containers_oom_total",
	I1107 23:27:46.208560   29973 command_runner.go:130] > # 	"containers_oom",
	I1107 23:27:46.208564   29973 command_runner.go:130] > # 	"processes_defunct",
	I1107 23:27:46.208571   29973 command_runner.go:130] > # 	"operations_total",
	I1107 23:27:46.208575   29973 command_runner.go:130] > # 	"operations_latency_seconds",
	I1107 23:27:46.208580   29973 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1107 23:27:46.208589   29973 command_runner.go:130] > # 	"operations_errors_total",
	I1107 23:27:46.208600   29973 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1107 23:27:46.208610   29973 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1107 23:27:46.208614   29973 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1107 23:27:46.208619   29973 command_runner.go:130] > # 	"image_pulls_success_total",
	I1107 23:27:46.208623   29973 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1107 23:27:46.208630   29973 command_runner.go:130] > # 	"containers_oom_count_total",
	I1107 23:27:46.208634   29973 command_runner.go:130] > # ]
	I1107 23:27:46.208641   29973 command_runner.go:130] > # The port on which the metrics server will listen.
	I1107 23:27:46.208645   29973 command_runner.go:130] > # metrics_port = 9090
	I1107 23:27:46.208653   29973 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1107 23:27:46.208657   29973 command_runner.go:130] > # metrics_socket = ""
	I1107 23:27:46.208664   29973 command_runner.go:130] > # The certificate for the secure metrics server.
	I1107 23:27:46.208671   29973 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1107 23:27:46.208679   29973 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1107 23:27:46.208683   29973 command_runner.go:130] > # certificate on any modification event.
	I1107 23:27:46.208688   29973 command_runner.go:130] > # metrics_cert = ""
	I1107 23:27:46.208696   29973 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1107 23:27:46.208703   29973 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1107 23:27:46.208707   29973 command_runner.go:130] > # metrics_key = ""
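	Since enable_metrics is true in this config, the endpoint can be scraped directly; a quick check, assuming the commented-out default metrics_port of 9090 is in effect on the node:

	curl -s http://127.0.0.1:9090/metrics | grep crio_ | head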
	I1107 23:27:46.208712   29973 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1107 23:27:46.208718   29973 command_runner.go:130] > [crio.tracing]
	I1107 23:27:46.208724   29973 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1107 23:27:46.208730   29973 command_runner.go:130] > # enable_tracing = false
	I1107 23:27:46.208735   29973 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1107 23:27:46.208742   29973 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1107 23:27:46.208747   29973 command_runner.go:130] > # Number of samples to collect per million spans.
	I1107 23:27:46.208754   29973 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1107 23:27:46.208760   29973 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1107 23:27:46.208768   29973 command_runner.go:130] > [crio.stats]
	I1107 23:27:46.208774   29973 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1107 23:27:46.208782   29973 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1107 23:27:46.208786   29973 command_runner.go:130] > # stats_collection_period = 0
	I1107 23:27:46.208808   29973 command_runner.go:130] ! time="2023-11-07 23:27:46.189703131Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1107 23:27:46.208837   29973 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1107 23:27:46.208911   29973 cni.go:84] Creating CNI manager for ""
	I1107 23:27:46.208924   29973 cni.go:136] 2 nodes found, recommending kindnet
	I1107 23:27:46.208934   29973 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 23:27:46.208959   29973 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.137 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-553062 NodeName:multinode-553062-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1107 23:27:46.209081   29973 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-553062-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 23:27:46.209128   29973 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-553062-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-553062 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1107 23:27:46.209173   29973 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1107 23:27:46.217503   29973 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.3': No such file or directory
	I1107 23:27:46.217708   29973 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.3': No such file or directory
	
	Initiating transfer...
	I1107 23:27:46.217757   29973 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.3
	I1107 23:27:46.226592   29973 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl.sha256
	I1107 23:27:46.226622   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/linux/amd64/v1.28.3/kubectl -> /var/lib/minikube/binaries/v1.28.3/kubectl
	I1107 23:27:46.226671   29973 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17585-9647/.minikube/cache/linux/amd64/v1.28.3/kubelet
	I1107 23:27:46.226712   29973 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17585-9647/.minikube/cache/linux/amd64/v1.28.3/kubeadm
	I1107 23:27:46.226676   29973 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubectl
	I1107 23:27:46.230817   29973 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubectl': No such file or directory
	I1107 23:27:46.234225   29973 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubectl': No such file or directory
	I1107 23:27:46.234252   29973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/cache/linux/amd64/v1.28.3/kubectl --> /var/lib/minikube/binaries/v1.28.3/kubectl (49872896 bytes)
	I1107 23:27:47.231585   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/linux/amd64/v1.28.3/kubeadm -> /var/lib/minikube/binaries/v1.28.3/kubeadm
	I1107 23:27:47.231669   29973 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubeadm
	I1107 23:27:47.236253   29973 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubeadm': No such file or directory
	I1107 23:27:47.236350   29973 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubeadm': No such file or directory
	I1107 23:27:47.236376   29973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/cache/linux/amd64/v1.28.3/kubeadm --> /var/lib/minikube/binaries/v1.28.3/kubeadm (49045504 bytes)
	I1107 23:27:47.877268   29973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:27:47.891420   29973 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/linux/amd64/v1.28.3/kubelet -> /var/lib/minikube/binaries/v1.28.3/kubelet
	I1107 23:27:47.891495   29973 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubelet
	I1107 23:27:47.895688   29973 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubelet': No such file or directory
	I1107 23:27:47.896063   29973 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubelet': No such file or directory
	I1107 23:27:47.896107   29973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/cache/linux/amd64/v1.28.3/kubelet --> /var/lib/minikube/binaries/v1.28.3/kubelet (110780416 bytes)
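	The ?checksum=file:...sha256 suffix on the download URLs above makes minikube verify each binary against the published digest before transfer; a rough manual equivalent of that check:

	RELEASE=v1.28.3
	for BIN in kubectl kubeadm kubelet; do
	  curl -LO "https://dl.k8s.io/release/${RELEASE}/bin/linux/amd64/${BIN}"
	  curl -LO "https://dl.k8s.io/release/${RELEASE}/bin/linux/amd64/${BIN}.sha256"
	  echo "$(cat ${BIN}.sha256)  ${BIN}" | sha256sum --check
	done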
	I1107 23:27:48.444694   29973 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1107 23:27:48.452658   29973 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1107 23:27:48.468679   29973 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 23:27:48.484979   29973 ssh_runner.go:195] Run: grep 192.168.39.246	control-plane.minikube.internal$ /etc/hosts
	I1107 23:27:48.488591   29973 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.246	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 23:27:48.500691   29973 host.go:66] Checking if "multinode-553062" exists ...
	I1107 23:27:48.500918   29973 config.go:182] Loaded profile config "multinode-553062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:27:48.501149   29973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:27:48.501185   29973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:27:48.514982   29973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39253
	I1107 23:27:48.515403   29973 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:27:48.515824   29973 main.go:141] libmachine: Using API Version  1
	I1107 23:27:48.515845   29973 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:27:48.516155   29973 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:27:48.516344   29973 main.go:141] libmachine: (multinode-553062) Calling .DriverName
	I1107 23:27:48.516494   29973 start.go:304] JoinCluster: &{Name:multinode-553062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-553062 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.137 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:27:48.516597   29973 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1107 23:27:48.516614   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHHostname
	I1107 23:27:48.519304   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:27:48.519648   29973 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:26:27 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:27:48.519673   29973 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:27:48.519833   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHPort
	I1107 23:27:48.519981   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:27:48.520124   29973 main.go:141] libmachine: (multinode-553062) Calling .GetSSHUsername
	I1107 23:27:48.520236   29973 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062/id_rsa Username:docker}
	I1107 23:27:48.697467   29973 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token igspkt.nee3bzghbgtm1rgu --discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 
	I1107 23:27:48.701367   29973 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.137 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1107 23:27:48.701408   29973 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token igspkt.nee3bzghbgtm1rgu --discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-553062-m02"
	I1107 23:27:48.746544   29973 command_runner.go:130] > [preflight] Running pre-flight checks
	I1107 23:27:48.891376   29973 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1107 23:27:48.891410   29973 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1107 23:27:48.933555   29973 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 23:27:48.933579   29973 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 23:27:48.933585   29973 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1107 23:27:49.061055   29973 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1107 23:27:51.081977   29973 command_runner.go:130] > This node has joined the cluster:
	I1107 23:27:51.082010   29973 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1107 23:27:51.082020   29973 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1107 23:27:51.082029   29973 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1107 23:27:51.083538   29973 command_runner.go:130] ! W1107 23:27:48.736885     825 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1107 23:27:51.083563   29973 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 23:27:51.083598   29973 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token igspkt.nee3bzghbgtm1rgu --discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-553062-m02": (2.382175727s)
	I1107 23:27:51.083624   29973 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1107 23:27:51.335823   29973 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I1107 23:27:51.335861   29973 start.go:306] JoinCluster complete in 2.81936673s
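	As the join output itself suggests, the new worker can be confirmed from the control plane (assuming the default kubeconfig context created for this profile):

	kubectl --context multinode-553062 get nodes -o wide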
	I1107 23:27:51.335874   29973 cni.go:84] Creating CNI manager for ""
	I1107 23:27:51.335881   29973 cni.go:136] 2 nodes found, recommending kindnet
	I1107 23:27:51.335925   29973 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1107 23:27:51.341373   29973 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1107 23:27:51.341401   29973 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1107 23:27:51.341411   29973 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1107 23:27:51.341422   29973 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1107 23:27:51.341431   29973 command_runner.go:130] > Access: 2023-11-07 23:26:24.870432823 +0000
	I1107 23:27:51.341441   29973 command_runner.go:130] > Modify: 2023-11-07 07:42:50.000000000 +0000
	I1107 23:27:51.341454   29973 command_runner.go:130] > Change: 2023-11-07 23:26:23.025432823 +0000
	I1107 23:27:51.341464   29973 command_runner.go:130] >  Birth: -
	I1107 23:27:51.341521   29973 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1107 23:27:51.341535   29973 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1107 23:27:51.360966   29973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1107 23:27:51.712021   29973 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1107 23:27:51.717878   29973 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1107 23:27:51.722174   29973 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1107 23:27:51.736400   29973 command_runner.go:130] > daemonset.apps/kindnet configured
	I1107 23:27:51.739965   29973 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1107 23:27:51.740200   29973 kapi.go:59] client config for multinode-553062: &rest.Config{Host:"https://192.168.39.246:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.key", CAFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:27:51.740525   29973 round_trippers.go:463] GET https://192.168.39.246:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1107 23:27:51.740542   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:51.740549   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:51.740555   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:51.744664   29973 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1107 23:27:51.744684   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:51.744693   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:51.744701   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:51.744722   29973 round_trippers.go:580]     Content-Length: 291
	I1107 23:27:51.744735   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:51 GMT
	I1107 23:27:51.744744   29973 round_trippers.go:580]     Audit-Id: ccf20569-4e73-42e0-ab52-a87cf2f6614f
	I1107 23:27:51.744757   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:51.744769   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:51.744869   29973 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"99a4298f-5274-4bac-956d-86f8091a0b82","resourceVersion":"415","creationTimestamp":"2023-11-07T23:26:57Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1107 23:27:51.745000   29973 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-553062" context rescaled to 1 replicas
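	The scale-subresource call above is driven through client-go; a rough hand-run equivalent of the same rescale would be:

	kubectl --context multinode-553062 -n kube-system scale deployment coredns --replicas=1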
	I1107 23:27:51.745034   29973 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.137 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1107 23:27:51.747626   29973 out.go:177] * Verifying Kubernetes components...
	I1107 23:27:51.749052   29973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:27:51.771577   29973 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1107 23:27:51.771797   29973 kapi.go:59] client config for multinode-553062: &rest.Config{Host:"https://192.168.39.246:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.key", CAFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:27:51.772017   29973 node_ready.go:35] waiting up to 6m0s for node "multinode-553062-m02" to be "Ready" ...
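	The GET polling that follows is the client-go equivalent of waiting on the node's Ready condition, roughly:

	kubectl --context multinode-553062 wait --for=condition=Ready node/multinode-553062-m02 --timeout=6m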
	I1107 23:27:51.772088   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:27:51.772098   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:51.772105   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:51.772113   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:51.774773   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:27:51.774792   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:51.774801   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:51.774809   29973 round_trippers.go:580]     Content-Length: 3531
	I1107 23:27:51.774818   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:51 GMT
	I1107 23:27:51.774823   29973 round_trippers.go:580]     Audit-Id: b71e3769-d1d9-426e-a990-610667bf73bb
	I1107 23:27:51.774831   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:51.774836   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:51.774845   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:51.774916   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m02","uid":"53135fdd-bf09-4482-8469-d918d3e75ee3","resourceVersion":"464","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 2507 chars]
	I1107 23:27:51.775197   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:27:51.775215   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:51.775225   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:51.775234   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:51.778147   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:27:51.778171   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:51.778180   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:51.778188   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:51.778197   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:51.778208   29973 round_trippers.go:580]     Content-Length: 3531
	I1107 23:27:51.778220   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:51 GMT
	I1107 23:27:51.778230   29973 round_trippers.go:580]     Audit-Id: d4189fc7-ed2b-4112-99a5-bab448b88092
	I1107 23:27:51.778242   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:51.778324   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m02","uid":"53135fdd-bf09-4482-8469-d918d3e75ee3","resourceVersion":"464","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 2507 chars]
	I1107 23:27:52.278963   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:27:52.278986   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:52.278993   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:52.279001   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:52.282393   29973 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:27:52.282413   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:52.282420   29973 round_trippers.go:580]     Audit-Id: 226fc5a8-1cd6-49c0-8cc9-a5f99c7501b4
	I1107 23:27:52.282425   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:52.282430   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:52.282436   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:52.282444   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:52.282451   29973 round_trippers.go:580]     Content-Length: 3531
	I1107 23:27:52.282464   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:52 GMT
	I1107 23:27:52.282640   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m02","uid":"53135fdd-bf09-4482-8469-d918d3e75ee3","resourceVersion":"464","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 2507 chars]
	I1107 23:27:52.779311   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:27:52.779340   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:52.779350   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:52.779359   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:52.782402   29973 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:27:52.782420   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:52.782426   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:52.782434   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:52.782439   29973 round_trippers.go:580]     Content-Length: 3531
	I1107 23:27:52.782444   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:52 GMT
	I1107 23:27:52.782449   29973 round_trippers.go:580]     Audit-Id: e31d1ec0-a01d-4f34-81d0-12f0ca3a8672
	I1107 23:27:52.782458   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:52.782463   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:52.782534   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m02","uid":"53135fdd-bf09-4482-8469-d918d3e75ee3","resourceVersion":"464","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 2507 chars]
	I1107 23:27:53.278799   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:27:53.278825   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:53.278833   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:53.278839   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:53.281638   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:27:53.281661   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:53.281668   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:53.281673   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:53.281679   29973 round_trippers.go:580]     Content-Length: 3531
	I1107 23:27:53.281684   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:53 GMT
	I1107 23:27:53.281693   29973 round_trippers.go:580]     Audit-Id: 0e587395-209e-4cbc-bc05-41f4ceaa5749
	I1107 23:27:53.281698   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:53.281703   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:53.281774   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m02","uid":"53135fdd-bf09-4482-8469-d918d3e75ee3","resourceVersion":"464","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 2507 chars]
	I1107 23:27:53.778726   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:27:53.778749   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:53.778760   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:53.778769   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:53.782054   29973 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:27:53.782075   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:53.782085   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:53.782094   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:53.782103   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:53.782110   29973 round_trippers.go:580]     Content-Length: 3531
	I1107 23:27:53.782123   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:53 GMT
	I1107 23:27:53.782134   29973 round_trippers.go:580]     Audit-Id: 90a2687c-91d7-4a9d-aa22-8358d0edc768
	I1107 23:27:53.782141   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:53.782259   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m02","uid":"53135fdd-bf09-4482-8469-d918d3e75ee3","resourceVersion":"464","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 2507 chars]
	I1107 23:27:53.782478   29973 node_ready.go:58] node "multinode-553062-m02" has status "Ready":"False"
	I1107 23:27:54.279233   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:27:54.279254   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:54.279262   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:54.279268   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:54.282247   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:27:54.282275   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:54.282285   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:54.282293   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:54.282301   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:54.282308   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:54.282321   29973 round_trippers.go:580]     Content-Length: 3531
	I1107 23:27:54.282326   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:54 GMT
	I1107 23:27:54.282332   29973 round_trippers.go:580]     Audit-Id: 0b9454c6-aad8-4657-8a4c-40a5167f5e23
	I1107 23:27:54.282415   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m02","uid":"53135fdd-bf09-4482-8469-d918d3e75ee3","resourceVersion":"464","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 2507 chars]
	I1107 23:27:54.778914   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:27:54.778940   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:54.778949   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:54.778955   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:54.781912   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:27:54.781937   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:54.781946   29973 round_trippers.go:580]     Content-Length: 3640
	I1107 23:27:54.781963   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:54 GMT
	I1107 23:27:54.781968   29973 round_trippers.go:580]     Audit-Id: ea0d14d2-aab5-4b8e-98e3-fec2a43041f1
	I1107 23:27:54.781974   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:54.781979   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:54.781984   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:54.781992   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:54.782065   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m02","uid":"53135fdd-bf09-4482-8469-d918d3e75ee3","resourceVersion":"472","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2616 chars]
	I1107 23:27:55.279691   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:27:55.279720   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:55.279733   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:55.279742   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:55.282738   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:27:55.282763   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:55.282774   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:55 GMT
	I1107 23:27:55.282782   29973 round_trippers.go:580]     Audit-Id: 8001c11a-5d9e-4abd-afff-adde239317da
	I1107 23:27:55.282790   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:55.282797   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:55.282810   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:55.282818   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:55.282826   29973 round_trippers.go:580]     Content-Length: 3640
	I1107 23:27:55.282921   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m02","uid":"53135fdd-bf09-4482-8469-d918d3e75ee3","resourceVersion":"472","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2616 chars]
	I1107 23:27:55.779495   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:27:55.779528   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:55.779542   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:55.779551   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:55.782399   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:27:55.782426   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:55.782435   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:55 GMT
	I1107 23:27:55.782440   29973 round_trippers.go:580]     Audit-Id: a5ce986a-77d2-450d-89bb-c4aa17705368
	I1107 23:27:55.782445   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:55.782450   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:55.782455   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:55.782464   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:55.782472   29973 round_trippers.go:580]     Content-Length: 3640
	I1107 23:27:55.782641   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m02","uid":"53135fdd-bf09-4482-8469-d918d3e75ee3","resourceVersion":"472","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2616 chars]
	I1107 23:27:55.782899   29973 node_ready.go:58] node "multinode-553062-m02" has status "Ready":"False"
	I1107 23:27:56.279434   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:27:56.279455   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:56.279464   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:56.279470   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:56.282304   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:27:56.282322   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:56.282330   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:56.282337   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:56.282345   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:56.282356   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:56.282366   29973 round_trippers.go:580]     Content-Length: 3640
	I1107 23:27:56.282374   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:56 GMT
	I1107 23:27:56.282384   29973 round_trippers.go:580]     Audit-Id: 9f8ad610-4cd0-424f-8fbc-de02f7f8a422
	I1107 23:27:56.282547   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m02","uid":"53135fdd-bf09-4482-8469-d918d3e75ee3","resourceVersion":"472","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2616 chars]
	I1107 23:27:56.779639   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:27:56.779663   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:56.779671   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:56.779676   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:56.782631   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:27:56.782651   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:56.782663   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:56.782669   29973 round_trippers.go:580]     Content-Length: 3640
	I1107 23:27:56.782677   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:56 GMT
	I1107 23:27:56.782683   29973 round_trippers.go:580]     Audit-Id: 6efdac3a-2ed7-4134-bdc2-1cf24b199d50
	I1107 23:27:56.782692   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:56.782697   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:56.782702   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:56.782824   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m02","uid":"53135fdd-bf09-4482-8469-d918d3e75ee3","resourceVersion":"472","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2616 chars]
	I1107 23:27:57.279418   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:27:57.279455   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:57.279467   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:57.279490   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:57.281804   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:27:57.281830   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:57.281841   29973 round_trippers.go:580]     Audit-Id: 2bd6047d-d112-4c5e-8b91-f64cdadab006
	I1107 23:27:57.281851   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:57.281864   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:57.281872   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:57.281892   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:57.281906   29973 round_trippers.go:580]     Content-Length: 3640
	I1107 23:27:57.281915   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:57 GMT
	I1107 23:27:57.282021   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m02","uid":"53135fdd-bf09-4482-8469-d918d3e75ee3","resourceVersion":"472","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2616 chars]
	I1107 23:27:57.779573   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:27:57.779597   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:57.779608   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:57.779614   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:57.782901   29973 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:27:57.782923   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:57.782932   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:57.782939   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:57.782947   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:57.782956   29973 round_trippers.go:580]     Content-Length: 3640
	I1107 23:27:57.782965   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:57 GMT
	I1107 23:27:57.782975   29973 round_trippers.go:580]     Audit-Id: a753b87f-b375-4483-bb91-1b99f7b9002a
	I1107 23:27:57.782986   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:57.783129   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m02","uid":"53135fdd-bf09-4482-8469-d918d3e75ee3","resourceVersion":"472","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2616 chars]
	I1107 23:27:57.783349   29973 node_ready.go:58] node "multinode-553062-m02" has status "Ready":"False"
	I1107 23:27:58.279758   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:27:58.279787   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:58.279799   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:58.279809   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:58.283413   29973 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:27:58.283439   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:58.283447   29973 round_trippers.go:580]     Audit-Id: fd293d39-68f0-4ae5-a3b5-b373de220136
	I1107 23:27:58.283453   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:58.283458   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:58.283466   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:58.283471   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:58.283479   29973 round_trippers.go:580]     Content-Length: 3640
	I1107 23:27:58.283484   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:58 GMT
	I1107 23:27:58.283624   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m02","uid":"53135fdd-bf09-4482-8469-d918d3e75ee3","resourceVersion":"472","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2616 chars]
	I1107 23:27:58.778720   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:27:58.778742   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:58.778750   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:58.778756   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:58.781851   29973 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:27:58.781881   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:58.781892   29973 round_trippers.go:580]     Content-Length: 3640
	I1107 23:27:58.781901   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:58 GMT
	I1107 23:27:58.781907   29973 round_trippers.go:580]     Audit-Id: 096aab87-a96a-49fd-b1d8-30b25b6c817c
	I1107 23:27:58.781915   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:58.781921   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:58.781940   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:58.781948   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:58.781999   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m02","uid":"53135fdd-bf09-4482-8469-d918d3e75ee3","resourceVersion":"472","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2616 chars]
	I1107 23:27:59.279638   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:27:59.279667   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:59.279683   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:59.279691   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:59.283340   29973 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:27:59.283366   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:59.283377   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:59.283385   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:59.283394   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:59.283402   29973 round_trippers.go:580]     Content-Length: 3640
	I1107 23:27:59.283410   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:59 GMT
	I1107 23:27:59.283421   29973 round_trippers.go:580]     Audit-Id: 4ac7d1c5-434f-4662-a4a4-8cac19d7c82f
	I1107 23:27:59.283432   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:59.283535   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m02","uid":"53135fdd-bf09-4482-8469-d918d3e75ee3","resourceVersion":"472","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2616 chars]
	I1107 23:27:59.778850   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:27:59.778879   29973 round_trippers.go:469] Request Headers:
	I1107 23:27:59.778887   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:27:59.778893   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:27:59.781829   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:27:59.781852   29973 round_trippers.go:577] Response Headers:
	I1107 23:27:59.781860   29973 round_trippers.go:580]     Audit-Id: eff6025c-40e5-45ff-a59b-04c7943cfb69
	I1107 23:27:59.781865   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:27:59.781871   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:27:59.781876   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:27:59.781881   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:27:59.781887   29973 round_trippers.go:580]     Content-Length: 3640
	I1107 23:27:59.781892   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:27:59 GMT
	I1107 23:27:59.781997   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m02","uid":"53135fdd-bf09-4482-8469-d918d3e75ee3","resourceVersion":"472","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2616 chars]
	I1107 23:28:00.279609   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:28:00.279638   29973 round_trippers.go:469] Request Headers:
	I1107 23:28:00.279654   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:28:00.279662   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:28:00.282550   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:28:00.282573   29973 round_trippers.go:577] Response Headers:
	I1107 23:28:00.282580   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:28:00.282585   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:28:00.282590   29973 round_trippers.go:580]     Content-Length: 3640
	I1107 23:28:00.282595   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:28:00 GMT
	I1107 23:28:00.282600   29973 round_trippers.go:580]     Audit-Id: ec0192b4-3853-4f32-8be8-b187ff777257
	I1107 23:28:00.282605   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:28:00.282610   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:28:00.282792   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m02","uid":"53135fdd-bf09-4482-8469-d918d3e75ee3","resourceVersion":"472","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2616 chars]
	I1107 23:28:00.283085   29973 node_ready.go:58] node "multinode-553062-m02" has status "Ready":"False"
	I1107 23:28:00.779194   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:28:00.779215   29973 round_trippers.go:469] Request Headers:
	I1107 23:28:00.779223   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:28:00.779229   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:28:00.782479   29973 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:28:00.782499   29973 round_trippers.go:577] Response Headers:
	I1107 23:28:00.782509   29973 round_trippers.go:580]     Content-Length: 3640
	I1107 23:28:00.782515   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:28:00 GMT
	I1107 23:28:00.782523   29973 round_trippers.go:580]     Audit-Id: 2a493137-39dc-49b5-a6ac-59e3f580f8e4
	I1107 23:28:00.782532   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:28:00.782555   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:28:00.782568   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:28:00.782577   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:28:00.782661   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m02","uid":"53135fdd-bf09-4482-8469-d918d3e75ee3","resourceVersion":"472","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2616 chars]
	I1107 23:28:01.279511   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:28:01.279538   29973 round_trippers.go:469] Request Headers:
	I1107 23:28:01.279551   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:28:01.279560   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:28:01.282403   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:28:01.282427   29973 round_trippers.go:577] Response Headers:
	I1107 23:28:01.282437   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:28:01.282446   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:28:01.282454   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:28:01.282463   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:28:01.282490   29973 round_trippers.go:580]     Content-Length: 3909
	I1107 23:28:01.282503   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:28:01 GMT
	I1107 23:28:01.282512   29973 round_trippers.go:580]     Audit-Id: a029fcc3-f3c5-4972-aeee-6bcca3330088
	I1107 23:28:01.282616   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m02","uid":"53135fdd-bf09-4482-8469-d918d3e75ee3","resourceVersion":"491","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2885 chars]
	I1107 23:28:01.779513   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:28:01.779536   29973 round_trippers.go:469] Request Headers:
	I1107 23:28:01.779544   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:28:01.779550   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:28:01.782650   29973 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:28:01.782668   29973 round_trippers.go:577] Response Headers:
	I1107 23:28:01.782674   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:28:01.782681   29973 round_trippers.go:580]     Content-Length: 3726
	I1107 23:28:01.782690   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:28:01 GMT
	I1107 23:28:01.782707   29973 round_trippers.go:580]     Audit-Id: f9a2c6dd-641e-4d71-9bee-67124825103b
	I1107 23:28:01.782716   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:28:01.782726   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:28:01.782732   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:28:01.782824   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m02","uid":"53135fdd-bf09-4482-8469-d918d3e75ee3","resourceVersion":"496","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2702 chars]
	I1107 23:28:01.783088   29973 node_ready.go:49] node "multinode-553062-m02" has status "Ready":"True"
	I1107 23:28:01.783104   29973 node_ready.go:38] duration metric: took 10.011073423s waiting for node "multinode-553062-m02" to be "Ready" ...
	I1107 23:28:01.783112   29973 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
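The loop above issues a GET for the Node object roughly every 500ms until its NodeReady condition turns True (10.011s in this run, at 23:28:01). A minimal sketch of that polling pattern with client-go, assuming a standard kubeconfig; the helper name waitNodeReady, the hard-coded interval, and the timeout are illustrative and are not minikube's actual node_ready.go implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server until the named node reports a
// NodeReady condition of True, or the timeout elapses.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil // the "Ready":"True" transition seen at 23:28:01 above
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence of the GETs in this trace
	}
	return fmt.Errorf("node %q did not become Ready within %v", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "multinode-553062-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}

The GET / Response Status / Response Body lines in this trace come from client-go's debug round tripper (round_trippers.go), which logs every request and response when run at high log verbosity.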
	I1107 23:28:01.783194   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1107 23:28:01.783204   29973 round_trippers.go:469] Request Headers:
	I1107 23:28:01.783214   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:28:01.783228   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:28:01.786720   29973 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:28:01.786741   29973 round_trippers.go:577] Response Headers:
	I1107 23:28:01.786748   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:28:01.786753   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:28:01 GMT
	I1107 23:28:01.786759   29973 round_trippers.go:580]     Audit-Id: 10695b64-dddb-441b-944a-7b103b7de091
	I1107 23:28:01.786781   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:28:01.786788   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:28:01.786793   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:28:01.788018   29973 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"496"},"items":[{"metadata":{"name":"coredns-5dd5756b68-6ggfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"785c6064-d793-4959-8e34-28b4fc2549fc","resourceVersion":"411","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b131694e-1b3b-40e6-bc1b-3f62a604903c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b131694e-1b3b-40e6-bc1b-3f62a604903c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67370 chars]
	I1107 23:28:01.790086   29973 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6ggfr" in "kube-system" namespace to be "Ready" ...
	I1107 23:28:01.790151   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6ggfr
	I1107 23:28:01.790159   29973 round_trippers.go:469] Request Headers:
	I1107 23:28:01.790166   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:28:01.790172   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:28:01.792191   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:28:01.792209   29973 round_trippers.go:577] Response Headers:
	I1107 23:28:01.792219   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:28:01.792224   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:28:01 GMT
	I1107 23:28:01.792229   29973 round_trippers.go:580]     Audit-Id: 23206e35-f0e7-421d-8cec-9fa5763edf8b
	I1107 23:28:01.792234   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:28:01.792239   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:28:01.792244   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:28:01.792510   29973 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6ggfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"785c6064-d793-4959-8e34-28b4fc2549fc","resourceVersion":"411","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b131694e-1b3b-40e6-bc1b-3f62a604903c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b131694e-1b3b-40e6-bc1b-3f62a604903c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1107 23:28:01.792943   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:28:01.792956   29973 round_trippers.go:469] Request Headers:
	I1107 23:28:01.792962   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:28:01.792968   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:28:01.795042   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:28:01.795061   29973 round_trippers.go:577] Response Headers:
	I1107 23:28:01.795070   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:28:01.795078   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:28:01.795085   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:28:01.795093   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:28:01.795107   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:28:01 GMT
	I1107 23:28:01.795114   29973 round_trippers.go:580]     Audit-Id: af698c61-641f-4274-b487-b0f8103f7e20
	I1107 23:28:01.795455   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"389","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1107 23:28:01.795861   29973 pod_ready.go:92] pod "coredns-5dd5756b68-6ggfr" in "kube-system" namespace has status "Ready":"True"
	I1107 23:28:01.795883   29973 pod_ready.go:81] duration metric: took 5.775664ms waiting for pod "coredns-5dd5756b68-6ggfr" in "kube-system" namespace to be "Ready" ...
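Each per-pod wait (coredns above; etcd, kube-apiserver, and the rest below) applies the same readiness test: fetch the pod and check its PodReady condition, then re-fetch the hosting node, as the paired GET .../nodes/multinode-553062 requests show. A sketch of the pod-side predicate, reusing the corev1 import from the example above; podIsReady is an illustrative name, not the exported helper in minikube's pod_ready.go:

// podIsReady reports whether a pod's PodReady condition is True, i.e. the
// status "Ready":"True" printed in the pod_ready.go lines of this log.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}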
	I1107 23:28:01.795897   29973 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:28:01.795967   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-553062
	I1107 23:28:01.795976   29973 round_trippers.go:469] Request Headers:
	I1107 23:28:01.795982   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:28:01.795988   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:28:01.798724   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:28:01.798740   29973 round_trippers.go:577] Response Headers:
	I1107 23:28:01.798749   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:28:01.798758   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:28:01 GMT
	I1107 23:28:01.798767   29973 round_trippers.go:580]     Audit-Id: 257ad521-3b35-484c-80e8-df504a6dd6e9
	I1107 23:28:01.798774   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:28:01.798779   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:28:01.798784   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:28:01.798897   29973 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-553062","namespace":"kube-system","uid":"3819c5f8-686f-4ce6-95fb-e9d5bb68cbc1","resourceVersion":"405","creationTimestamp":"2023-11-07T23:26:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.246:2379","kubernetes.io/config.hash":"f82562fbdca14daeb385ae6968954f46","kubernetes.io/config.mirror":"f82562fbdca14daeb385ae6968954f46","kubernetes.io/config.seen":"2023-11-07T23:26:48.362630200Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1107 23:28:01.799242   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:28:01.799254   29973 round_trippers.go:469] Request Headers:
	I1107 23:28:01.799260   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:28:01.799266   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:28:01.801696   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:28:01.801711   29973 round_trippers.go:577] Response Headers:
	I1107 23:28:01.801720   29973 round_trippers.go:580]     Audit-Id: 87f0f35f-7f13-49e9-8449-0c94ef7feb6f
	I1107 23:28:01.801731   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:28:01.801738   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:28:01.801746   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:28:01.801751   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:28:01.801756   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:28:01 GMT
	I1107 23:28:01.802146   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"389","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1107 23:28:01.802420   29973 pod_ready.go:92] pod "etcd-multinode-553062" in "kube-system" namespace has status "Ready":"True"
	I1107 23:28:01.802434   29973 pod_ready.go:81] duration metric: took 6.526713ms waiting for pod "etcd-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:28:01.802445   29973 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:28:01.802504   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-553062
	I1107 23:28:01.802513   29973 round_trippers.go:469] Request Headers:
	I1107 23:28:01.802519   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:28:01.802525   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:28:01.804588   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:28:01.804607   29973 round_trippers.go:577] Response Headers:
	I1107 23:28:01.804616   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:28:01 GMT
	I1107 23:28:01.804624   29973 round_trippers.go:580]     Audit-Id: 5d7001df-37b2-4332-8f4c-34079be9d07f
	I1107 23:28:01.804642   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:28:01.804655   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:28:01.804664   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:28:01.804672   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:28:01.804846   29973 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-553062","namespace":"kube-system","uid":"30896fa0-3d8f-4861-bdf5-ad94796ad097","resourceVersion":"406","creationTimestamp":"2023-11-07T23:26:57Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.246:8443","kubernetes.io/config.hash":"cf3161d745dce4ca9e35cf659a0b5ec9","kubernetes.io/config.mirror":"cf3161d745dce4ca9e35cf659a0b5ec9","kubernetes.io/config.seen":"2023-11-07T23:26:57.103263110Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1107 23:28:01.805299   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:28:01.805315   29973 round_trippers.go:469] Request Headers:
	I1107 23:28:01.805322   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:28:01.805327   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:28:01.807404   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:28:01.807425   29973 round_trippers.go:577] Response Headers:
	I1107 23:28:01.807434   29973 round_trippers.go:580]     Audit-Id: 49741660-55b6-43c1-adab-46e83ab67f85
	I1107 23:28:01.807442   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:28:01.807450   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:28:01.807458   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:28:01.807467   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:28:01.807478   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:28:01 GMT
	I1107 23:28:01.808045   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"389","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1107 23:28:01.808307   29973 pod_ready.go:92] pod "kube-apiserver-multinode-553062" in "kube-system" namespace has status "Ready":"True"
	I1107 23:28:01.808321   29973 pod_ready.go:81] duration metric: took 5.870808ms waiting for pod "kube-apiserver-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:28:01.808329   29973 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:28:01.808374   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-553062
	I1107 23:28:01.808384   29973 round_trippers.go:469] Request Headers:
	I1107 23:28:01.808392   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:28:01.808398   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:28:01.810541   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:28:01.810559   29973 round_trippers.go:577] Response Headers:
	I1107 23:28:01.810569   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:28:01.810577   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:28:01.810593   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:28:01 GMT
	I1107 23:28:01.810601   29973 round_trippers.go:580]     Audit-Id: f30ef9f7-9b21-4b61-980b-14b60d6163fb
	I1107 23:28:01.810612   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:28:01.810624   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:28:01.810888   29973 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-553062","namespace":"kube-system","uid":"5a895945-b908-44ba-a1c8-93245f6a93f5","resourceVersion":"407","creationTimestamp":"2023-11-07T23:26:57Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6355e861fae0971467df802e2b4d8be6","kubernetes.io/config.mirror":"6355e861fae0971467df802e2b4d8be6","kubernetes.io/config.seen":"2023-11-07T23:26:57.103264314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1107 23:28:01.811380   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:28:01.811403   29973 round_trippers.go:469] Request Headers:
	I1107 23:28:01.811414   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:28:01.811427   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:28:01.813420   29973 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 23:28:01.813435   29973 round_trippers.go:577] Response Headers:
	I1107 23:28:01.813441   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:28:01.813447   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:28:01.813452   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:28:01.813460   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:28:01.813465   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:28:01 GMT
	I1107 23:28:01.813473   29973 round_trippers.go:580]     Audit-Id: abf96cd1-5f7a-4223-a7d9-a519bb999639
	I1107 23:28:01.813675   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"389","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1107 23:28:01.814032   29973 pod_ready.go:92] pod "kube-controller-manager-multinode-553062" in "kube-system" namespace has status "Ready":"True"
	I1107 23:28:01.814053   29973 pod_ready.go:81] duration metric: took 5.717155ms waiting for pod "kube-controller-manager-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:28:01.814064   29973 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-944rz" in "kube-system" namespace to be "Ready" ...
	I1107 23:28:01.980469   29973 request.go:629] Waited for 166.312621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-944rz
	I1107 23:28:01.980551   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-944rz
	I1107 23:28:01.980556   29973 round_trippers.go:469] Request Headers:
	I1107 23:28:01.980568   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:28:01.980574   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:28:01.983640   29973 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:28:01.983663   29973 round_trippers.go:577] Response Headers:
	I1107 23:28:01.983672   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:28:01.983681   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:28:01.983689   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:28:01.983698   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:28:01 GMT
	I1107 23:28:01.983706   29973 round_trippers.go:580]     Audit-Id: 4ddab3ff-4eb7-40d6-af59-04d27f53ecb5
	I1107 23:28:01.983715   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:28:01.983873   29973 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-944rz","generateName":"kube-proxy-","namespace":"kube-system","uid":"db20b1cf-b422-4649-a6e1-4549c4c56f33","resourceVersion":"378","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"072addbc-9bf2-4d6f-93c3-120a159f2721","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"072addbc-9bf2-4d6f-93c3-120a159f2721\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1107 23:28:02.179547   29973 request.go:629] Waited for 195.266492ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:28:02.179642   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:28:02.179650   29973 round_trippers.go:469] Request Headers:
	I1107 23:28:02.179666   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:28:02.179684   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:28:02.182305   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:28:02.182327   29973 round_trippers.go:577] Response Headers:
	I1107 23:28:02.182333   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:28:02 GMT
	I1107 23:28:02.182339   29973 round_trippers.go:580]     Audit-Id: a27c8295-b9f8-4cff-b8b4-88c41fbb0cda
	I1107 23:28:02.182344   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:28:02.182349   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:28:02.182354   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:28:02.182361   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:28:02.182515   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"389","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1107 23:28:02.182843   29973 pod_ready.go:92] pod "kube-proxy-944rz" in "kube-system" namespace has status "Ready":"True"
	I1107 23:28:02.182865   29973 pod_ready.go:81] duration metric: took 368.784938ms waiting for pod "kube-proxy-944rz" in "kube-system" namespace to be "Ready" ...
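
The "Waited ... due to client-side throttling, not priority and fairness" lines above come from client-go's per-client token-bucket rate limiter, not from the server's API Priority and Fairness machinery. A minimal sketch of where that limit lives, assuming a standard kubeconfig; the QPS/Burst values shown are client-go's historical defaults, not anything this test sets:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load ~/.kube/config the way kubectl does (assumption: a standard
        // kubeconfig; minikube assembles its client config differently).
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // client-go's historical defaults: 5 req/s sustained, bursts of 10.
        // Each request takes a token from this bucket; when the bucket is
        // empty the client sleeps and logs the "Waited ... due to
        // client-side throttling" message seen above.
        config.QPS = 5
        config.Burst = 10
        client := kubernetes.NewForConfigOrDie(config)
        _ = client
        fmt.Printf("rate limit: QPS=%v Burst=%v\n", config.QPS, config.Burst)
    }

The ~200ms pauses logged above are consistent with a 5 req/s bucket; raising QPS/Burst (or supplying a custom RateLimiter on rest.Config) shortens or removes those waits.
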
	I1107 23:28:02.182878   29973 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rktlk" in "kube-system" namespace to be "Ready" ...
	I1107 23:28:02.380324   29973 request.go:629] Waited for 197.381514ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rktlk
	I1107 23:28:02.380416   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rktlk
	I1107 23:28:02.380428   29973 round_trippers.go:469] Request Headers:
	I1107 23:28:02.380440   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:28:02.380448   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:28:02.383114   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:28:02.383134   29973 round_trippers.go:577] Response Headers:
	I1107 23:28:02.383143   29973 round_trippers.go:580]     Audit-Id: 2cb748ed-6161-46c0-8c70-d7d991e71a3e
	I1107 23:28:02.383150   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:28:02.383158   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:28:02.383166   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:28:02.383179   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:28:02.383188   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:28:02 GMT
	I1107 23:28:02.383359   29973 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rktlk","generateName":"kube-proxy-","namespace":"kube-system","uid":"92ea69ee-cd72-4594-a338-9837cc44e5a1","resourceVersion":"479","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"072addbc-9bf2-4d6f-93c3-120a159f2721","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"072addbc-9bf2-4d6f-93c3-120a159f2721\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5525 chars]
	I1107 23:28:02.580192   29973 request.go:629] Waited for 196.350739ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:28:02.580272   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:28:02.580279   29973 round_trippers.go:469] Request Headers:
	I1107 23:28:02.580288   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:28:02.580306   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:28:02.583753   29973 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:28:02.583772   29973 round_trippers.go:577] Response Headers:
	I1107 23:28:02.583778   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:28:02.583783   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:28:02.583788   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:28:02.583793   29973 round_trippers.go:580]     Content-Length: 3726
	I1107 23:28:02.583799   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:28:02 GMT
	I1107 23:28:02.583811   29973 round_trippers.go:580]     Audit-Id: 30edc7a2-0c00-4d8b-a654-52b34f193139
	I1107 23:28:02.583816   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:28:02.583882   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m02","uid":"53135fdd-bf09-4482-8469-d918d3e75ee3","resourceVersion":"496","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2702 chars]
	I1107 23:28:02.584108   29973 pod_ready.go:92] pod "kube-proxy-rktlk" in "kube-system" namespace has status "Ready":"True"
	I1107 23:28:02.584121   29973 pod_ready.go:81] duration metric: took 401.236537ms waiting for pod "kube-proxy-rktlk" in "kube-system" namespace to be "Ready" ...
	I1107 23:28:02.584129   29973 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:28:02.780570   29973 request.go:629] Waited for 196.384303ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-553062
	I1107 23:28:02.780656   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-553062
	I1107 23:28:02.780668   29973 round_trippers.go:469] Request Headers:
	I1107 23:28:02.780679   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:28:02.780705   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:28:02.783508   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:28:02.783525   29973 round_trippers.go:577] Response Headers:
	I1107 23:28:02.783531   29973 round_trippers.go:580]     Audit-Id: 5503f3a2-57f5-46dc-98f1-62dff4c97361
	I1107 23:28:02.783537   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:28:02.783544   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:28:02.783549   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:28:02.783555   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:28:02.783560   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:28:02 GMT
	I1107 23:28:02.783693   29973 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-553062","namespace":"kube-system","uid":"334a75af-c6cb-45ac-a020-8afc3f4a4e7a","resourceVersion":"404","creationTimestamp":"2023-11-07T23:26:57Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"101b31a45aab34f5dc66aed5e9e7cce1","kubernetes.io/config.mirror":"101b31a45aab34f5dc66aed5e9e7cce1","kubernetes.io/config.seen":"2023-11-07T23:26:57.103265171Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1107 23:28:02.980400   29973 request.go:629] Waited for 196.359958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:28:02.980462   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:28:02.980467   29973 round_trippers.go:469] Request Headers:
	I1107 23:28:02.980474   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:28:02.980480   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:28:02.983046   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:28:02.983062   29973 round_trippers.go:577] Response Headers:
	I1107 23:28:02.983073   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:28:02.983078   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:28:02.983084   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:28:02 GMT
	I1107 23:28:02.983089   29973 round_trippers.go:580]     Audit-Id: fc14619f-4df3-43f1-ba70-e81a6bdd1882
	I1107 23:28:02.983094   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:28:02.983099   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:28:02.983331   29973 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"389","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1107 23:28:02.983638   29973 pod_ready.go:92] pod "kube-scheduler-multinode-553062" in "kube-system" namespace has status "Ready":"True"
	I1107 23:28:02.983654   29973 pod_ready.go:81] duration metric: took 399.51462ms waiting for pod "kube-scheduler-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:28:02.983663   29973 pod_ready.go:38] duration metric: took 1.200536126s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
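
Each pod_ready block above is one GET of the pod followed by one GET of its node, with readiness decided from the pod's status conditions. A minimal client-go sketch of that check, assuming a standard kubeconfig; the helper name isPodReady and the 2-second poll interval are illustrative, not minikube's actual pod_ready.go code:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady scans status.conditions for PodReady, the same condition the
    // log reports as `has status "Ready":"True"`. (Illustrative helper.)
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        // Poll one control-plane pod until it is Ready, as the log does in
        // turn for etcd, kube-apiserver, kube-controller-manager, the
        // kube-proxy pods and kube-scheduler. The 2s interval is an assumption.
        for {
            pod, err := client.CoreV1().Pods("kube-system").Get(
                context.TODO(), "etcd-multinode-553062", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Printf("pod %s is Ready\n", pod.Name)
                return
            }
            time.Sleep(2 * time.Second)
        }
    }
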
	I1107 23:28:02.983676   29973 system_svc.go:44] waiting for kubelet service to be running ....
	I1107 23:28:02.983719   29973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:28:02.996640   29973 system_svc.go:56] duration metric: took 12.9583ms WaitForService to wait for kubelet.
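
The kubelet check above reduces to `systemctl is-active --quiet`, which prints nothing and signals the result purely through its exit status. A local sketch of the same exit-status test via os/exec; minikube actually runs the command over SSH inside the VM (ssh_runner.go), which this sketch does not reproduce:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same argv as the logged command: exit 0 iff the unit is active.
        cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
        if err := cmd.Run(); err != nil {
            // A non-zero exit surfaces as *exec.ExitError.
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }
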
	I1107 23:28:02.996665   29973 kubeadm.go:581] duration metric: took 11.251598584s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1107 23:28:02.996683   29973 node_conditions.go:102] verifying NodePressure condition ...
	I1107 23:28:03.180093   29973 request.go:629] Waited for 183.349675ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes
	I1107 23:28:03.180159   29973 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes
	I1107 23:28:03.180165   29973 round_trippers.go:469] Request Headers:
	I1107 23:28:03.180173   29973 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:28:03.180180   29973 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:28:03.183006   29973 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:28:03.183022   29973 round_trippers.go:577] Response Headers:
	I1107 23:28:03.183031   29973 round_trippers.go:580]     Audit-Id: 15212c6f-f052-49e3-9412-0540f7babf21
	I1107 23:28:03.183038   29973 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:28:03.183046   29973 round_trippers.go:580]     Content-Type: application/json
	I1107 23:28:03.183053   29973 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:28:03.183062   29973 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:28:03.183079   29973 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:28:03 GMT
	I1107 23:28:03.183451   29973 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"496"},"items":[{"metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"389","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 9646 chars]
	I1107 23:28:03.183936   29973 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1107 23:28:03.183958   29973 node_conditions.go:123] node cpu capacity is 2
	I1107 23:28:03.183966   29973 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1107 23:28:03.183970   29973 node_conditions.go:123] node cpu capacity is 2
	I1107 23:28:03.183979   29973 node_conditions.go:105] duration metric: took 187.287878ms to run NodePressure ...
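
The NodePressure pass above is a single LIST of /api/v1/nodes with the capacity read off each item's status, which is where the two per-node figures (17784752Ki ephemeral storage, 2 CPUs) come from. A client-go sketch that recovers the same readout, assuming a standard kubeconfig:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        // One LIST of /api/v1/nodes, as in the log, then read status.capacity
        // for every node in the cluster.
        nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
    }
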
	I1107 23:28:03.183989   29973 start.go:228] waiting for startup goroutines ...
	I1107 23:28:03.184024   29973 start.go:242] writing updated cluster config ...
	I1107 23:28:03.184275   29973 ssh_runner.go:195] Run: rm -f paused
	I1107 23:28:03.229776   29973 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1107 23:28:03.232700   29973 out.go:177] * Done! kubectl is now configured to use "multinode-553062" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-07 23:26:23 UTC, ends at Tue 2023-11-07 23:28:11 UTC. --
	Nov 07 23:28:11 multinode-553062 crio[721]: time="2023-11-07 23:28:11.786012703Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699399691785998726,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=9beb19b0-5c46-477f-8ae6-d3d0c2840933 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 07 23:28:11 multinode-553062 crio[721]: time="2023-11-07 23:28:11.786852158Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2775fca2-2af4-4fa9-8e0f-a908562b2f1e name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:28:11 multinode-553062 crio[721]: time="2023-11-07 23:28:11.786928932Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2775fca2-2af4-4fa9-8e0f-a908562b2f1e name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:28:11 multinode-553062 crio[721]: time="2023-11-07 23:28:11.787154810Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:13f477cf94f0d7f170ce1c2cb225b8b35e7538ee3f5f5b40b39bbea03ac33e08,PodSandboxId:20edef406bdb85ea0629d08003c2f1f07beef041208be8b6e1dd3ede3f7eb629,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1699399687818948070,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-tvwc7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aba25d32-a9c1-4008-b112-3409cec0c411,},Annotations:map[string]string{io.kubernetes.container.hash: 27d64535,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f39e43b7002206257647b320cd7aa94531213c0de222fb03c1fc223e69373d,PodSandboxId:d2cd4eab2c6f1b1bb44552f568e323581cde2a9507042b3540cf76bae6c3c512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699399636849213740,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6ggfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785c6064-d793-4959-8e34-28b4fc2549fc,},Annotations:map[string]string{io.kubernetes.container.hash: 128aa424,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50213a68a2f2238c8247e66cb66269cd09c8a1bbf30aef6802094b4fc3818371,PodSandboxId:f8265e4b5fc4d07fb28826f79097a0b4605c3bbdb411968d8001cc16de407f29,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699399636622550354,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 85179396-d02a-404a-a93e-e10db8c673b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6ef05ac5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de46485a390dcd174aa2d0d0782a0932060db3598ad57198a1441cf1ffa1ad2,PodSandboxId:aa5aba28892e2976f3311bf5381ee6a7b83e1a67134722cec416a392aaa75e19,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1699399633809336376,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9stvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: a9981d59-dbff-456f-9024-2754c2a9d0c6,},Annotations:map[string]string{io.kubernetes.container.hash: 14e7cd4d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:796b2a513b6af2ed60da72a31487715096745c5bf0548d5c12b5d89d57168394,PodSandboxId:eaec7a5506bb776d4885e44d6b86575181f97c83ba7ee690a7de64d3bc859000,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699399631885956242,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-944rz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db20b1cf-b422-4649-a6e1-4549c4
c56f33,},Annotations:map[string]string{io.kubernetes.container.hash: 495a4f8d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4a3bd1e878e269f9593b5ecf66b5d7ba2a591a018ffb087a2e560c77805c4b0,PodSandboxId:2b85ca2dde0bec0a3cae85d495146e6133fc023d67618629a4de9ea6d68dc8a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699399609904006672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f82562fbdca14daeb385ae6968954f46,},Annotations:map[string]string{io.kubernetes
.container.hash: 25bdecaa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aa089adde58b8290bc632a5d2ed346203e996f8ef633c945c2c7837188bf05a,PodSandboxId:2297f41e8294408bfc3e0532d79657fcae8ff86dbdca2871c2229d12604bd5aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699399609808604686,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 101b31a45aab34f5dc66aed5e9e7cce1,},Annotations:map[string]string{io.kubernetes.container.h
ash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b96f0b0d050d53b36755bec0bf379fb2ed0c7ffd4cc24ad375f181cc9faac055,PodSandboxId:f73ad021fd60af1f3aac1b8c1a3c26c583f60a192f60f66e5864d01eba39a6d3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699399609656732450,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6355e861fae0971467df802e2b4d8be6,},Annotations:map[string]string{i
o.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f60917d7ee76f8bc0991de243fd8a9da27aa228911b0518d31060209519367b,PodSandboxId:9c3d6e77b87a4cb98e687dd0d36f11a26c39c3cf5b343b3c7141768c81ecc800,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699399609512619254,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf3161d745dce4ca9e35cf659a0b5ec9,},Annotations:map[string]string{io.kubernetes
.container.hash: 1a0265a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2775fca2-2af4-4fa9-8e0f-a908562b2f1e name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:28:11 multinode-553062 crio[721]: time="2023-11-07 23:28:11.827890511Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c1aaeb5c-4d4c-4af0-9a90-92be6316badf name=/runtime.v1.RuntimeService/Version
	Nov 07 23:28:11 multinode-553062 crio[721]: time="2023-11-07 23:28:11.827984312Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c1aaeb5c-4d4c-4af0-9a90-92be6316badf name=/runtime.v1.RuntimeService/Version
	Nov 07 23:28:11 multinode-553062 crio[721]: time="2023-11-07 23:28:11.830917846Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f75e50b8-bcaf-4b99-b76b-1a25da60976b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 07 23:28:11 multinode-553062 crio[721]: time="2023-11-07 23:28:11.831327098Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699399691831313685,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f75e50b8-bcaf-4b99-b76b-1a25da60976b name=/runtime.v1.ImageService/ImageFsInfo
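
The `/runtime.v1.RuntimeService/...` and `/runtime.v1.ImageService/...` entries in this journal are CRI gRPC calls arriving over crio.sock, the socket named in the node's cri-socket annotation earlier in the log. A hedged sketch of issuing the same Version and ListContainers RPCs with the cri-api client; it assumes access to /var/run/crio/crio.sock and the k8s.io/cri-api and google.golang.org/grpc modules:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial the CRI-O socket; the CRI endpoint is plain gRPC over a unix
        // socket, so no TLS is involved.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Mirrors the journal's /runtime.v1.RuntimeService/Version request.
        v, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s %s (CRI %s)\n", v.RuntimeName, v.RuntimeVersion, v.RuntimeApiVersion)

        // Mirrors /runtime.v1.RuntimeService/ListContainers with no filters,
        // which CRI-O answers with the full container list logged above.
        cs, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        fmt.Println("containers:", len(cs.Containers))
    }

From the command line, `crictl version` and `crictl ps -a` exercise the same two RPCs.
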
	Nov 07 23:28:11 multinode-553062 crio[721]: time="2023-11-07 23:28:11.832343002Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2e369f7c-e940-4355-b1d1-080e0edc9495 name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:28:11 multinode-553062 crio[721]: time="2023-11-07 23:28:11.832420717Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2e369f7c-e940-4355-b1d1-080e0edc9495 name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:28:11 multinode-553062 crio[721]: time="2023-11-07 23:28:11.832674984Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:13f477cf94f0d7f170ce1c2cb225b8b35e7538ee3f5f5b40b39bbea03ac33e08,PodSandboxId:20edef406bdb85ea0629d08003c2f1f07beef041208be8b6e1dd3ede3f7eb629,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1699399687818948070,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-tvwc7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aba25d32-a9c1-4008-b112-3409cec0c411,},Annotations:map[string]string{io.kubernetes.container.hash: 27d64535,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f39e43b7002206257647b320cd7aa94531213c0de222fb03c1fc223e69373d,PodSandboxId:d2cd4eab2c6f1b1bb44552f568e323581cde2a9507042b3540cf76bae6c3c512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699399636849213740,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6ggfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785c6064-d793-4959-8e34-28b4fc2549fc,},Annotations:map[string]string{io.kubernetes.container.hash: 128aa424,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50213a68a2f2238c8247e66cb66269cd09c8a1bbf30aef6802094b4fc3818371,PodSandboxId:f8265e4b5fc4d07fb28826f79097a0b4605c3bbdb411968d8001cc16de407f29,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699399636622550354,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 85179396-d02a-404a-a93e-e10db8c673b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6ef05ac5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de46485a390dcd174aa2d0d0782a0932060db3598ad57198a1441cf1ffa1ad2,PodSandboxId:aa5aba28892e2976f3311bf5381ee6a7b83e1a67134722cec416a392aaa75e19,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1699399633809336376,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9stvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: a9981d59-dbff-456f-9024-2754c2a9d0c6,},Annotations:map[string]string{io.kubernetes.container.hash: 14e7cd4d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:796b2a513b6af2ed60da72a31487715096745c5bf0548d5c12b5d89d57168394,PodSandboxId:eaec7a5506bb776d4885e44d6b86575181f97c83ba7ee690a7de64d3bc859000,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699399631885956242,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-944rz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db20b1cf-b422-4649-a6e1-4549c4
c56f33,},Annotations:map[string]string{io.kubernetes.container.hash: 495a4f8d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4a3bd1e878e269f9593b5ecf66b5d7ba2a591a018ffb087a2e560c77805c4b0,PodSandboxId:2b85ca2dde0bec0a3cae85d495146e6133fc023d67618629a4de9ea6d68dc8a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699399609904006672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f82562fbdca14daeb385ae6968954f46,},Annotations:map[string]string{io.kubernetes
.container.hash: 25bdecaa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aa089adde58b8290bc632a5d2ed346203e996f8ef633c945c2c7837188bf05a,PodSandboxId:2297f41e8294408bfc3e0532d79657fcae8ff86dbdca2871c2229d12604bd5aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699399609808604686,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 101b31a45aab34f5dc66aed5e9e7cce1,},Annotations:map[string]string{io.kubernetes.container.h
ash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b96f0b0d050d53b36755bec0bf379fb2ed0c7ffd4cc24ad375f181cc9faac055,PodSandboxId:f73ad021fd60af1f3aac1b8c1a3c26c583f60a192f60f66e5864d01eba39a6d3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699399609656732450,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6355e861fae0971467df802e2b4d8be6,},Annotations:map[string]string{i
o.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f60917d7ee76f8bc0991de243fd8a9da27aa228911b0518d31060209519367b,PodSandboxId:9c3d6e77b87a4cb98e687dd0d36f11a26c39c3cf5b343b3c7141768c81ecc800,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699399609512619254,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf3161d745dce4ca9e35cf659a0b5ec9,},Annotations:map[string]string{io.kubernetes
.container.hash: 1a0265a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2e369f7c-e940-4355-b1d1-080e0edc9495 name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:28:11 multinode-553062 crio[721]: time="2023-11-07 23:28:11.875880735Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=94d7f804-60d5-43ae-b4cd-4558fbbf4b40 name=/runtime.v1.RuntimeService/Version
	Nov 07 23:28:11 multinode-553062 crio[721]: time="2023-11-07 23:28:11.875965871Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=94d7f804-60d5-43ae-b4cd-4558fbbf4b40 name=/runtime.v1.RuntimeService/Version
	Nov 07 23:28:11 multinode-553062 crio[721]: time="2023-11-07 23:28:11.877942230Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5fbbbf64-9fd0-4935-a83e-eb725c770242 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 07 23:28:11 multinode-553062 crio[721]: time="2023-11-07 23:28:11.878300286Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699399691878280662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=5fbbbf64-9fd0-4935-a83e-eb725c770242 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 07 23:28:11 multinode-553062 crio[721]: time="2023-11-07 23:28:11.879133243Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0c2a4a57-4c17-4943-8fe8-ab5a65e58780 name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:28:11 multinode-553062 crio[721]: time="2023-11-07 23:28:11.879213789Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0c2a4a57-4c17-4943-8fe8-ab5a65e58780 name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:28:11 multinode-553062 crio[721]: time="2023-11-07 23:28:11.879400455Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:13f477cf94f0d7f170ce1c2cb225b8b35e7538ee3f5f5b40b39bbea03ac33e08,PodSandboxId:20edef406bdb85ea0629d08003c2f1f07beef041208be8b6e1dd3ede3f7eb629,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1699399687818948070,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-tvwc7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aba25d32-a9c1-4008-b112-3409cec0c411,},Annotations:map[string]string{io.kubernetes.container.hash: 27d64535,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f39e43b7002206257647b320cd7aa94531213c0de222fb03c1fc223e69373d,PodSandboxId:d2cd4eab2c6f1b1bb44552f568e323581cde2a9507042b3540cf76bae6c3c512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699399636849213740,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6ggfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785c6064-d793-4959-8e34-28b4fc2549fc,},Annotations:map[string]string{io.kubernetes.container.hash: 128aa424,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50213a68a2f2238c8247e66cb66269cd09c8a1bbf30aef6802094b4fc3818371,PodSandboxId:f8265e4b5fc4d07fb28826f79097a0b4605c3bbdb411968d8001cc16de407f29,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699399636622550354,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 85179396-d02a-404a-a93e-e10db8c673b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6ef05ac5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de46485a390dcd174aa2d0d0782a0932060db3598ad57198a1441cf1ffa1ad2,PodSandboxId:aa5aba28892e2976f3311bf5381ee6a7b83e1a67134722cec416a392aaa75e19,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1699399633809336376,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9stvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: a9981d59-dbff-456f-9024-2754c2a9d0c6,},Annotations:map[string]string{io.kubernetes.container.hash: 14e7cd4d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:796b2a513b6af2ed60da72a31487715096745c5bf0548d5c12b5d89d57168394,PodSandboxId:eaec7a5506bb776d4885e44d6b86575181f97c83ba7ee690a7de64d3bc859000,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699399631885956242,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-944rz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db20b1cf-b422-4649-a6e1-4549c4
c56f33,},Annotations:map[string]string{io.kubernetes.container.hash: 495a4f8d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4a3bd1e878e269f9593b5ecf66b5d7ba2a591a018ffb087a2e560c77805c4b0,PodSandboxId:2b85ca2dde0bec0a3cae85d495146e6133fc023d67618629a4de9ea6d68dc8a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699399609904006672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f82562fbdca14daeb385ae6968954f46,},Annotations:map[string]string{io.kubernetes
.container.hash: 25bdecaa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aa089adde58b8290bc632a5d2ed346203e996f8ef633c945c2c7837188bf05a,PodSandboxId:2297f41e8294408bfc3e0532d79657fcae8ff86dbdca2871c2229d12604bd5aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699399609808604686,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 101b31a45aab34f5dc66aed5e9e7cce1,},Annotations:map[string]string{io.kubernetes.container.h
ash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b96f0b0d050d53b36755bec0bf379fb2ed0c7ffd4cc24ad375f181cc9faac055,PodSandboxId:f73ad021fd60af1f3aac1b8c1a3c26c583f60a192f60f66e5864d01eba39a6d3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699399609656732450,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6355e861fae0971467df802e2b4d8be6,},Annotations:map[string]string{i
o.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f60917d7ee76f8bc0991de243fd8a9da27aa228911b0518d31060209519367b,PodSandboxId:9c3d6e77b87a4cb98e687dd0d36f11a26c39c3cf5b343b3c7141768c81ecc800,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699399609512619254,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf3161d745dce4ca9e35cf659a0b5ec9,},Annotations:map[string]string{io.kubernetes
.container.hash: 1a0265a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0c2a4a57-4c17-4943-8fe8-ab5a65e58780 name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:28:11 multinode-553062 crio[721]: time="2023-11-07 23:28:11.920662127Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=42bb31b9-9741-464b-9979-5831ed6346cb name=/runtime.v1.RuntimeService/Version
	Nov 07 23:28:11 multinode-553062 crio[721]: time="2023-11-07 23:28:11.920747916Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=42bb31b9-9741-464b-9979-5831ed6346cb name=/runtime.v1.RuntimeService/Version
	Nov 07 23:28:11 multinode-553062 crio[721]: time="2023-11-07 23:28:11.921915756Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b25a8bc4-58da-40bd-9524-95a3b20374bd name=/runtime.v1.ImageService/ImageFsInfo
	Nov 07 23:28:11 multinode-553062 crio[721]: time="2023-11-07 23:28:11.922266053Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699399691922255099,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=b25a8bc4-58da-40bd-9524-95a3b20374bd name=/runtime.v1.ImageService/ImageFsInfo
	Nov 07 23:28:11 multinode-553062 crio[721]: time="2023-11-07 23:28:11.923023507Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d92cf8fb-ac65-4593-b957-fcb0594e895e name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:28:11 multinode-553062 crio[721]: time="2023-11-07 23:28:11.923095930Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d92cf8fb-ac65-4593-b957-fcb0594e895e name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:28:11 multinode-553062 crio[721]: time="2023-11-07 23:28:11.923282849Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:13f477cf94f0d7f170ce1c2cb225b8b35e7538ee3f5f5b40b39bbea03ac33e08,PodSandboxId:20edef406bdb85ea0629d08003c2f1f07beef041208be8b6e1dd3ede3f7eb629,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1699399687818948070,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-tvwc7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aba25d32-a9c1-4008-b112-3409cec0c411,},Annotations:map[string]string{io.kubernetes.container.hash: 27d64535,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f39e43b7002206257647b320cd7aa94531213c0de222fb03c1fc223e69373d,PodSandboxId:d2cd4eab2c6f1b1bb44552f568e323581cde2a9507042b3540cf76bae6c3c512,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699399636849213740,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6ggfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785c6064-d793-4959-8e34-28b4fc2549fc,},Annotations:map[string]string{io.kubernetes.container.hash: 128aa424,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50213a68a2f2238c8247e66cb66269cd09c8a1bbf30aef6802094b4fc3818371,PodSandboxId:f8265e4b5fc4d07fb28826f79097a0b4605c3bbdb411968d8001cc16de407f29,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699399636622550354,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 85179396-d02a-404a-a93e-e10db8c673b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6ef05ac5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de46485a390dcd174aa2d0d0782a0932060db3598ad57198a1441cf1ffa1ad2,PodSandboxId:aa5aba28892e2976f3311bf5381ee6a7b83e1a67134722cec416a392aaa75e19,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1699399633809336376,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9stvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: a9981d59-dbff-456f-9024-2754c2a9d0c6,},Annotations:map[string]string{io.kubernetes.container.hash: 14e7cd4d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:796b2a513b6af2ed60da72a31487715096745c5bf0548d5c12b5d89d57168394,PodSandboxId:eaec7a5506bb776d4885e44d6b86575181f97c83ba7ee690a7de64d3bc859000,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699399631885956242,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-944rz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db20b1cf-b422-4649-a6e1-4549c4
c56f33,},Annotations:map[string]string{io.kubernetes.container.hash: 495a4f8d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4a3bd1e878e269f9593b5ecf66b5d7ba2a591a018ffb087a2e560c77805c4b0,PodSandboxId:2b85ca2dde0bec0a3cae85d495146e6133fc023d67618629a4de9ea6d68dc8a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699399609904006672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f82562fbdca14daeb385ae6968954f46,},Annotations:map[string]string{io.kubernetes
.container.hash: 25bdecaa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aa089adde58b8290bc632a5d2ed346203e996f8ef633c945c2c7837188bf05a,PodSandboxId:2297f41e8294408bfc3e0532d79657fcae8ff86dbdca2871c2229d12604bd5aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699399609808604686,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 101b31a45aab34f5dc66aed5e9e7cce1,},Annotations:map[string]string{io.kubernetes.container.h
ash: 1a68c1c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b96f0b0d050d53b36755bec0bf379fb2ed0c7ffd4cc24ad375f181cc9faac055,PodSandboxId:f73ad021fd60af1f3aac1b8c1a3c26c583f60a192f60f66e5864d01eba39a6d3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699399609656732450,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6355e861fae0971467df802e2b4d8be6,},Annotations:map[string]string{i
o.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f60917d7ee76f8bc0991de243fd8a9da27aa228911b0518d31060209519367b,PodSandboxId:9c3d6e77b87a4cb98e687dd0d36f11a26c39c3cf5b343b3c7141768c81ecc800,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699399609512619254,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf3161d745dce4ca9e35cf659a0b5ec9,},Annotations:map[string]string{io.kubernetes
.container.hash: 1a0265a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d92cf8fb-ac65-4593-b957-fcb0594e895e name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	13f477cf94f0d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   20edef406bdb8       busybox-5bc68d56bd-tvwc7
	14f39e43b7002       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      55 seconds ago       Running             coredns                   0                   d2cd4eab2c6f1       coredns-5dd5756b68-6ggfr
	50213a68a2f22       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      55 seconds ago       Running             storage-provisioner       0                   f8265e4b5fc4d       storage-provisioner
	5de46485a390d       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      58 seconds ago       Running             kindnet-cni               0                   aa5aba28892e2       kindnet-9stvx
	796b2a513b6af       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                      About a minute ago   Running             kube-proxy                0                   eaec7a5506bb7       kube-proxy-944rz
	f4a3bd1e878e2       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   2b85ca2dde0be       etcd-multinode-553062
	4aa089adde58b       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                      About a minute ago   Running             kube-scheduler            0                   2297f41e82944       kube-scheduler-multinode-553062
	b96f0b0d050d5       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                      About a minute ago   Running             kube-controller-manager   0                   f73ad021fd60a       kube-controller-manager-multinode-553062
	2f60917d7ee76       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                      About a minute ago   Running             kube-apiserver            0                   9c3d6e77b87a4       kube-apiserver-multinode-553062
	
	* 
	* ==> coredns [14f39e43b7002206257647b320cd7aa94531213c0de222fb03c1fc223e69373d] <==
	* [INFO] 10.244.0.3:50856 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000179348s
	[INFO] 10.244.1.2:60211 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141874s
	[INFO] 10.244.1.2:45072 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002296158s
	[INFO] 10.244.1.2:57248 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000230738s
	[INFO] 10.244.1.2:51917 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115873s
	[INFO] 10.244.1.2:41937 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001302246s
	[INFO] 10.244.1.2:42458 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081405s
	[INFO] 10.244.1.2:50097 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088611s
	[INFO] 10.244.1.2:56905 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076232s
	[INFO] 10.244.0.3:58970 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149s
	[INFO] 10.244.0.3:45315 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010539s
	[INFO] 10.244.0.3:46664 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000313909s
	[INFO] 10.244.0.3:39369 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000345865s
	[INFO] 10.244.1.2:42305 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125425s
	[INFO] 10.244.1.2:60511 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000169196s
	[INFO] 10.244.1.2:42015 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112411s
	[INFO] 10.244.1.2:39122 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000134706s
	[INFO] 10.244.0.3:32941 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103053s
	[INFO] 10.244.0.3:53411 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000147583s
	[INFO] 10.244.0.3:60516 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000142172s
	[INFO] 10.244.0.3:35513 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000102075s
	[INFO] 10.244.1.2:43481 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000217098s
	[INFO] 10.244.1.2:36458 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001603s
	[INFO] 10.244.1.2:60784 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000079822s
	[INFO] 10.244.1.2:51532 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000100741s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-553062
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-553062
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e
	                    minikube.k8s.io/name=multinode-553062
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_07T23_26_58_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 Nov 2023 23:26:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-553062
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 Nov 2023 23:28:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 Nov 2023 23:27:15 +0000   Tue, 07 Nov 2023 23:26:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 Nov 2023 23:27:15 +0000   Tue, 07 Nov 2023 23:26:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 Nov 2023 23:27:15 +0000   Tue, 07 Nov 2023 23:26:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 Nov 2023 23:27:15 +0000   Tue, 07 Nov 2023 23:27:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.246
	  Hostname:    multinode-553062
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 506f0e1682cc46079bf3cb06bd687e61
	  System UUID:                506f0e16-82cc-4607-9bf3-cb06bd687e61
	  Boot ID:                    51843c37-ae5e-49f9-9671-29e8908f2039
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-tvwc7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-5dd5756b68-6ggfr                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     62s
	  kube-system                 etcd-multinode-553062                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         76s
	  kube-system                 kindnet-9stvx                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      62s
	  kube-system                 kube-apiserver-multinode-553062             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-multinode-553062    200m (10%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-proxy-944rz                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-scheduler-multinode-553062             100m (5%)     0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 60s   kube-proxy       
	  Normal  Starting                 75s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  75s   kubelet          Node multinode-553062 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    75s   kubelet          Node multinode-553062 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     75s   kubelet          Node multinode-553062 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  75s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           63s   node-controller  Node multinode-553062 event: Registered Node multinode-553062 in Controller
	  Normal  NodeReady                57s   kubelet          Node multinode-553062 status is now: NodeReady
	
	
	Name:               multinode-553062-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-553062-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 Nov 2023 23:27:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-553062-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 Nov 2023 23:28:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 Nov 2023 23:28:01 +0000   Tue, 07 Nov 2023 23:27:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 Nov 2023 23:28:01 +0000   Tue, 07 Nov 2023 23:27:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 Nov 2023 23:28:01 +0000   Tue, 07 Nov 2023 23:27:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 Nov 2023 23:28:01 +0000   Tue, 07 Nov 2023 23:28:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.137
	  Hostname:    multinode-553062-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 c575b648ec8840ffb6a9a6591e7501d2
	  System UUID:                c575b648-ec88-40ff-b6a9-a6591e7501d2
	  Boot ID:                    e163bf6e-df20-4938-8ce1-efd885f93d4c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-z67r2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-4v85d               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22s
	  kube-system                 kube-proxy-rktlk            0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16s                kube-proxy       
	  Normal  NodeHasSufficientMemory  22s (x5 over 23s)  kubelet          Node multinode-553062-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x5 over 23s)  kubelet          Node multinode-553062-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x5 over 23s)  kubelet          Node multinode-553062-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18s                node-controller  Node multinode-553062-m02 event: Registered Node multinode-553062-m02 in Controller
	  Normal  NodeReady                11s                kubelet          Node multinode-553062-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Nov 7 23:26] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067080] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.324221] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.437850] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.151646] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.025715] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.961909] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.111977] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.140884] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.099828] systemd-fstab-generator[681]: Ignoring "noauto" for root device
	[  +0.195725] systemd-fstab-generator[705]: Ignoring "noauto" for root device
	[  +9.569164] systemd-fstab-generator[930]: Ignoring "noauto" for root device
	[  +9.244546] systemd-fstab-generator[1265]: Ignoring "noauto" for root device
	[Nov 7 23:27] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [f4a3bd1e878e269f9593b5ecf66b5d7ba2a591a018ffb087a2e560c77805c4b0] <==
	* {"level":"info","ts":"2023-11-07T23:26:51.5516Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.246:2380"}
	{"level":"info","ts":"2023-11-07T23:26:51.551759Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.246:2380"}
	{"level":"info","ts":"2023-11-07T23:26:51.552659Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-07T23:26:51.55259Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b19954eb16571c64","initial-advertise-peer-urls":["https://192.168.39.246:2380"],"listen-peer-urls":["https://192.168.39.246:2380"],"advertise-client-urls":["https://192.168.39.246:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.246:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-07T23:26:51.90858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b19954eb16571c64 is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-07T23:26:51.908683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b19954eb16571c64 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-07T23:26:51.908728Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b19954eb16571c64 received MsgPreVoteResp from b19954eb16571c64 at term 1"}
	{"level":"info","ts":"2023-11-07T23:26:51.908758Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b19954eb16571c64 became candidate at term 2"}
	{"level":"info","ts":"2023-11-07T23:26:51.908781Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b19954eb16571c64 received MsgVoteResp from b19954eb16571c64 at term 2"}
	{"level":"info","ts":"2023-11-07T23:26:51.908807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b19954eb16571c64 became leader at term 2"}
	{"level":"info","ts":"2023-11-07T23:26:51.908833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b19954eb16571c64 elected leader b19954eb16571c64 at term 2"}
	{"level":"info","ts":"2023-11-07T23:26:51.912917Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b19954eb16571c64","local-member-attributes":"{Name:multinode-553062 ClientURLs:[https://192.168.39.246:2379]}","request-path":"/0/members/b19954eb16571c64/attributes","cluster-id":"7954d586cad9e091","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-07T23:26:51.912997Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-07T23:26:51.915702Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-07T23:26:51.915823Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-07T23:26:51.916307Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-07T23:26:51.91724Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.246:2379"}
	{"level":"info","ts":"2023-11-07T23:26:51.91738Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7954d586cad9e091","local-member-id":"b19954eb16571c64","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-07T23:26:51.917542Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-07T23:26:51.917589Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-07T23:26:51.930503Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-07T23:26:51.930556Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2023-11-07T23:27:49.750316Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.623191ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-07T23:27:49.750494Z","caller":"traceutil/trace.go:171","msg":"trace[1228763356] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:447; }","duration":"160.849613ms","start":"2023-11-07T23:27:49.589569Z","end":"2023-11-07T23:27:49.750419Z","steps":["trace[1228763356] 'range keys from in-memory index tree'  (duration: 160.560545ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:27:53.234759Z","caller":"traceutil/trace.go:171","msg":"trace[1696659784] transaction","detail":"{read_only:false; response_revision:470; number_of_response:1; }","duration":"141.326551ms","start":"2023-11-07T23:27:53.093401Z","end":"2023-11-07T23:27:53.234727Z","steps":["trace[1696659784] 'process raft request'  (duration: 141.113733ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  23:28:12 up 1 min,  0 users,  load average: 0.78, 0.39, 0.15
	Linux multinode-553062 5.10.57 #1 SMP Tue Nov 7 06:51:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [5de46485a390dcd174aa2d0d0782a0932060db3598ad57198a1441cf1ffa1ad2] <==
	* I1107 23:27:14.663488       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1107 23:27:14.663644       1 main.go:107] hostIP = 192.168.39.246
	podIP = 192.168.39.246
	I1107 23:27:14.664006       1 main.go:116] setting mtu 1500 for CNI 
	I1107 23:27:14.664045       1 main.go:146] kindnetd IP family: "ipv4"
	I1107 23:27:14.664080       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1107 23:27:15.261131       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I1107 23:27:15.261232       1 main.go:227] handling current node
	I1107 23:27:25.278987       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I1107 23:27:25.279044       1 main.go:227] handling current node
	I1107 23:27:35.283333       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I1107 23:27:35.283396       1 main.go:227] handling current node
	I1107 23:27:45.288928       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I1107 23:27:45.288980       1 main.go:227] handling current node
	I1107 23:27:55.293882       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I1107 23:27:55.294177       1 main.go:227] handling current node
	I1107 23:27:55.294214       1 main.go:223] Handling node with IPs: map[192.168.39.137:{}]
	I1107 23:27:55.294235       1 main.go:250] Node multinode-553062-m02 has CIDR [10.244.1.0/24] 
	I1107 23:27:55.294609       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.137 Flags: [] Table: 0} 
	I1107 23:28:05.303078       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I1107 23:28:05.303161       1 main.go:227] handling current node
	I1107 23:28:05.303184       1 main.go:223] Handling node with IPs: map[192.168.39.137:{}]
	I1107 23:28:05.303201       1 main.go:250] Node multinode-553062-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [2f60917d7ee76f8bc0991de243fd8a9da27aa228911b0518d31060209519367b] <==
	* I1107 23:26:53.918067       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1107 23:26:53.918991       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1107 23:26:53.927547       1 controller.go:624] quota admission added evaluator for: namespaces
	I1107 23:26:53.931570       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1107 23:26:53.952827       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1107 23:26:53.952927       1 aggregator.go:166] initial CRD sync complete...
	I1107 23:26:53.952961       1 autoregister_controller.go:141] Starting autoregister controller
	I1107 23:26:53.952968       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1107 23:26:53.952974       1 cache.go:39] Caches are synced for autoregister controller
	I1107 23:26:53.979615       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1107 23:26:54.820235       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1107 23:26:54.827171       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1107 23:26:54.827211       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1107 23:26:55.441298       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1107 23:26:55.496174       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1107 23:26:55.650711       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1107 23:26:55.658112       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.246]
	I1107 23:26:55.659159       1 controller.go:624] quota admission added evaluator for: endpoints
	I1107 23:26:55.663765       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1107 23:26:55.952573       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1107 23:26:57.021552       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1107 23:26:57.049931       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1107 23:26:57.063778       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1107 23:27:10.352680       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1107 23:27:10.503321       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [b96f0b0d050d53b36755bec0bf379fb2ed0c7ffd4cc24ad375f181cc9faac055] <==
	* I1107 23:27:15.857154       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.758µs"
	I1107 23:27:15.911494       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="51.681µs"
	I1107 23:27:17.322012       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.021µs"
	I1107 23:27:17.357547       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.973992ms"
	I1107 23:27:17.357638       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="41.421µs"
	I1107 23:27:19.551656       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1107 23:27:50.954139       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-553062-m02\" does not exist"
	I1107 23:27:50.979784       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-553062-m02" podCIDRs=["10.244.1.0/24"]
	I1107 23:27:50.986316       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rktlk"
	I1107 23:27:50.986393       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-4v85d"
	I1107 23:27:54.557337       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-553062-m02"
	I1107 23:27:54.557888       1 event.go:307] "Event occurred" object="multinode-553062-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-553062-m02 event: Registered Node multinode-553062-m02 in Controller"
	I1107 23:28:01.331637       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-553062-m02"
	I1107 23:28:03.954905       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1107 23:28:03.976831       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-z67r2"
	I1107 23:28:03.996249       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-tvwc7"
	I1107 23:28:04.004244       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="50.167159ms"
	I1107 23:28:04.041868       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="37.551926ms"
	I1107 23:28:04.058241       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="16.264879ms"
	I1107 23:28:04.058554       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="108.279µs"
	I1107 23:28:04.571680       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-z67r2" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-z67r2"
	I1107 23:28:07.721350       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.096299ms"
	I1107 23:28:07.721972       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="67.743µs"
	I1107 23:28:08.490871       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="10.221617ms"
	I1107 23:28:08.491006       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="44.124µs"
	
	* 
	* ==> kube-proxy [796b2a513b6af2ed60da72a31487715096745c5bf0548d5c12b5d89d57168394] <==
	* I1107 23:27:12.065353       1 server_others.go:69] "Using iptables proxy"
	I1107 23:27:12.078792       1 node.go:141] Successfully retrieved node IP: 192.168.39.246
	I1107 23:27:12.120154       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1107 23:27:12.120223       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1107 23:27:12.122957       1 server_others.go:152] "Using iptables Proxier"
	I1107 23:27:12.123021       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1107 23:27:12.123397       1 server.go:846] "Version info" version="v1.28.3"
	I1107 23:27:12.123500       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 23:27:12.124744       1 config.go:188] "Starting service config controller"
	I1107 23:27:12.124798       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1107 23:27:12.124827       1 config.go:97] "Starting endpoint slice config controller"
	I1107 23:27:12.124843       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1107 23:27:12.125814       1 config.go:315] "Starting node config controller"
	I1107 23:27:12.125856       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1107 23:27:12.225263       1 shared_informer.go:318] Caches are synced for service config
	I1107 23:27:12.225237       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1107 23:27:12.225977       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [4aa089adde58b8290bc632a5d2ed346203e996f8ef633c945c2c7837188bf05a] <==
	* W1107 23:26:53.955227       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1107 23:26:53.955259       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1107 23:26:53.955530       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1107 23:26:53.955643       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1107 23:26:53.955887       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1107 23:26:53.956019       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1107 23:26:53.956235       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1107 23:26:53.956284       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1107 23:26:53.956330       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1107 23:26:53.956339       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1107 23:26:53.958633       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1107 23:26:53.958708       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1107 23:26:54.890941       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1107 23:26:54.890991       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1107 23:26:54.928949       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1107 23:26:54.929004       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1107 23:26:55.092184       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1107 23:26:55.092235       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1107 23:26:55.119064       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1107 23:26:55.119113       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1107 23:26:55.227344       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1107 23:26:55.227395       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1107 23:26:55.314216       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1107 23:26:55.314270       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1107 23:26:57.443170       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-07 23:26:23 UTC, ends at Tue 2023-11-07 23:28:12 UTC. --
	Nov 07 23:27:10 multinode-553062 kubelet[1272]: I1107 23:27:10.612982    1272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a9981d59-dbff-456f-9024-2754c2a9d0c6-cni-cfg\") pod \"kindnet-9stvx\" (UID: \"a9981d59-dbff-456f-9024-2754c2a9d0c6\") " pod="kube-system/kindnet-9stvx"
	Nov 07 23:27:10 multinode-553062 kubelet[1272]: I1107 23:27:10.613002    1272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/db20b1cf-b422-4649-a6e1-4549c4c56f33-kube-proxy\") pod \"kube-proxy-944rz\" (UID: \"db20b1cf-b422-4649-a6e1-4549c4c56f33\") " pod="kube-system/kube-proxy-944rz"
	Nov 07 23:27:10 multinode-553062 kubelet[1272]: I1107 23:27:10.613020    1272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db20b1cf-b422-4649-a6e1-4549c4c56f33-lib-modules\") pod \"kube-proxy-944rz\" (UID: \"db20b1cf-b422-4649-a6e1-4549c4c56f33\") " pod="kube-system/kube-proxy-944rz"
	Nov 07 23:27:10 multinode-553062 kubelet[1272]: I1107 23:27:10.613042    1272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9981d59-dbff-456f-9024-2754c2a9d0c6-xtables-lock\") pod \"kindnet-9stvx\" (UID: \"a9981d59-dbff-456f-9024-2754c2a9d0c6\") " pod="kube-system/kindnet-9stvx"
	Nov 07 23:27:10 multinode-553062 kubelet[1272]: I1107 23:27:10.613059    1272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db20b1cf-b422-4649-a6e1-4549c4c56f33-xtables-lock\") pod \"kube-proxy-944rz\" (UID: \"db20b1cf-b422-4649-a6e1-4549c4c56f33\") " pod="kube-system/kube-proxy-944rz"
	Nov 07 23:27:10 multinode-553062 kubelet[1272]: I1107 23:27:10.613076    1272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9981d59-dbff-456f-9024-2754c2a9d0c6-lib-modules\") pod \"kindnet-9stvx\" (UID: \"a9981d59-dbff-456f-9024-2754c2a9d0c6\") " pod="kube-system/kindnet-9stvx"
	Nov 07 23:27:10 multinode-553062 kubelet[1272]: I1107 23:27:10.613121    1272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94c4j\" (UniqueName: \"kubernetes.io/projected/a9981d59-dbff-456f-9024-2754c2a9d0c6-kube-api-access-94c4j\") pod \"kindnet-9stvx\" (UID: \"a9981d59-dbff-456f-9024-2754c2a9d0c6\") " pod="kube-system/kindnet-9stvx"
	Nov 07 23:27:15 multinode-553062 kubelet[1272]: I1107 23:27:15.291820    1272 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-944rz" podStartSLOduration=5.2917751840000005 podCreationTimestamp="2023-11-07 23:27:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-07 23:27:12.27455013 +0000 UTC m=+15.279875362" watchObservedRunningTime="2023-11-07 23:27:15.291775184 +0000 UTC m=+18.297100422"
	Nov 07 23:27:15 multinode-553062 kubelet[1272]: I1107 23:27:15.291919    1272 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-9stvx" podStartSLOduration=5.29190559 podCreationTimestamp="2023-11-07 23:27:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-07 23:27:15.290790722 +0000 UTC m=+18.296115954" watchObservedRunningTime="2023-11-07 23:27:15.29190559 +0000 UTC m=+18.297230837"
	Nov 07 23:27:15 multinode-553062 kubelet[1272]: I1107 23:27:15.817409    1272 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 07 23:27:15 multinode-553062 kubelet[1272]: I1107 23:27:15.854044    1272 topology_manager.go:215] "Topology Admit Handler" podUID="785c6064-d793-4959-8e34-28b4fc2549fc" podNamespace="kube-system" podName="coredns-5dd5756b68-6ggfr"
	Nov 07 23:27:15 multinode-553062 kubelet[1272]: I1107 23:27:15.876027    1272 topology_manager.go:215] "Topology Admit Handler" podUID="85179396-d02a-404a-a93e-e10db8c673b6" podNamespace="kube-system" podName="storage-provisioner"
	Nov 07 23:27:15 multinode-553062 kubelet[1272]: I1107 23:27:15.953324    1272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/785c6064-d793-4959-8e34-28b4fc2549fc-config-volume\") pod \"coredns-5dd5756b68-6ggfr\" (UID: \"785c6064-d793-4959-8e34-28b4fc2549fc\") " pod="kube-system/coredns-5dd5756b68-6ggfr"
	Nov 07 23:27:15 multinode-553062 kubelet[1272]: I1107 23:27:15.953369    1272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/85179396-d02a-404a-a93e-e10db8c673b6-tmp\") pod \"storage-provisioner\" (UID: \"85179396-d02a-404a-a93e-e10db8c673b6\") " pod="kube-system/storage-provisioner"
	Nov 07 23:27:15 multinode-553062 kubelet[1272]: I1107 23:27:15.953391    1272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl9ds\" (UniqueName: \"kubernetes.io/projected/85179396-d02a-404a-a93e-e10db8c673b6-kube-api-access-gl9ds\") pod \"storage-provisioner\" (UID: \"85179396-d02a-404a-a93e-e10db8c673b6\") " pod="kube-system/storage-provisioner"
	Nov 07 23:27:15 multinode-553062 kubelet[1272]: I1107 23:27:15.953412    1272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnbts\" (UniqueName: \"kubernetes.io/projected/785c6064-d793-4959-8e34-28b4fc2549fc-kube-api-access-cnbts\") pod \"coredns-5dd5756b68-6ggfr\" (UID: \"785c6064-d793-4959-8e34-28b4fc2549fc\") " pod="kube-system/coredns-5dd5756b68-6ggfr"
	Nov 07 23:27:17 multinode-553062 kubelet[1272]: I1107 23:27:17.319765    1272 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-6ggfr" podStartSLOduration=7.319677482 podCreationTimestamp="2023-11-07 23:27:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-07 23:27:17.317717538 +0000 UTC m=+20.323042783" watchObservedRunningTime="2023-11-07 23:27:17.319677482 +0000 UTC m=+20.325002727"
	Nov 07 23:27:17 multinode-553062 kubelet[1272]: I1107 23:27:17.380374    1272 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=6.380337527 podCreationTimestamp="2023-11-07 23:27:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-07 23:27:17.379919273 +0000 UTC m=+20.385244506" watchObservedRunningTime="2023-11-07 23:27:17.380337527 +0000 UTC m=+20.385663075"
	Nov 07 23:27:57 multinode-553062 kubelet[1272]: E1107 23:27:57.212065    1272 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 07 23:27:57 multinode-553062 kubelet[1272]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 07 23:27:57 multinode-553062 kubelet[1272]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 07 23:27:57 multinode-553062 kubelet[1272]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 07 23:28:04 multinode-553062 kubelet[1272]: I1107 23:28:04.007705    1272 topology_manager.go:215] "Topology Admit Handler" podUID="aba25d32-a9c1-4008-b112-3409cec0c411" podNamespace="default" podName="busybox-5bc68d56bd-tvwc7"
	Nov 07 23:28:04 multinode-553062 kubelet[1272]: I1107 23:28:04.161610    1272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm9mb\" (UniqueName: \"kubernetes.io/projected/aba25d32-a9c1-4008-b112-3409cec0c411-kube-api-access-jm9mb\") pod \"busybox-5bc68d56bd-tvwc7\" (UID: \"aba25d32-a9c1-4008-b112-3409cec0c411\") " pod="default/busybox-5bc68d56bd-tvwc7"
	Nov 07 23:28:08 multinode-553062 kubelet[1272]: I1107 23:28:08.480115    1272 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-tvwc7" podStartSLOduration=2.599606712 podCreationTimestamp="2023-11-07 23:28:03 +0000 UTC" firstStartedPulling="2023-11-07 23:28:04.916619558 +0000 UTC m=+67.921944785" lastFinishedPulling="2023-11-07 23:28:07.797054547 +0000 UTC m=+70.802379773" observedRunningTime="2023-11-07 23:28:08.47966863 +0000 UTC m=+71.484993877" watchObservedRunningTime="2023-11-07 23:28:08.4800417 +0000 UTC m=+71.485366925"
	

                                                
                                                
-- /stdout --
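Note on the kube-scheduler log above: the burst of "forbidden" list/watch errors at startup is the usual race between the scheduler's informers and the API server's RBAC bootstrap; the "Caches are synced" line at 23:26:57 shows the informers recovered once the default roles were in place. If such errors persisted, the scheduler's permissions could be spot-checked once the cluster is reachable. A minimal diagnostic sketch using standard kubectl impersonation (context name taken from this test profile):

    # each command prints yes/no for a resource the reflector errors complain about
    kubectl --context multinode-553062 auth can-i list statefulsets.apps --as system:kube-scheduler
    kubectl --context multinode-553062 auth can-i list persistentvolumes --as system:kube-scheduler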
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-553062 -n multinode-553062
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-553062 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.22s)
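The kubelet journal above also records a failed iptables canary: the KUBE-KUBELET-CANARY chain cannot be created in the ip6tables "nat" table because the guest kernel has no IPv6 nat table loaded. The canary only probes for externally flushed rules, so on an IPv4-only guest this warning is typically harmless. A minimal check from inside the VM, assuming SSH access (minikube -p multinode-553062 ssh) and that ip6table_nat is built for the guest kernel:

    lsmod | grep ip6table_nat    # is the IPv6 nat table module loaded?
    sudo modprobe ip6table_nat   # try loading it (fails if not built into the ISO kernel)
    sudo ip6tables -t nat -L -n  # lists the nat chains once the table exists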

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (682.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-553062
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-553062
E1107 23:30:38.956643   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
E1107 23:30:42.436171   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-553062: exit status 82 (2m1.293231401s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-553062"  ...
	* Stopping node "multinode-553062"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:292: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-553062" : exit status 82
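Exit status 82 (GUEST_STOP_TIMEOUT) means the kvm2 driver gave up while libvirt still reported the domain as "Running". In that state the domain can be inspected and, if necessary, forced down with the libvirt client directly; a sketch assuming virsh is available on the host and the driver's qemu:///system URI (visible in the profile config dumped below):

    virsh -c qemu:///system list --all                 # confirm the reported domain state
    virsh -c qemu:///system shutdown multinode-553062  # request an ACPI shutdown
    virsh -c qemu:///system destroy multinode-553062   # hard power-off, last resort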
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-553062 --wait=true -v=8 --alsologtostderr
E1107 23:32:02.002959   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
E1107 23:33:53.872143   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
E1107 23:35:38.956322   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
E1107 23:35:42.434529   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
E1107 23:37:05.483767   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
E1107 23:38:53.871982   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
E1107 23:40:16.918013   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
E1107 23:40:38.956734   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
E1107 23:40:42.434153   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-553062 --wait=true -v=8 --alsologtostderr: (9m18.077214475s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-553062
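The interleaved cert_rotation errors above appear to come from client-go's client-certificate reload watcher: the test host's kubeconfig still carries user entries for earlier profiles (addons-245409, functional-514284, ingress-addon-legacy-823610) whose client.crt files were removed when those clusters were torn down, so every resync logs a missing-file error. One way to list which kubeconfig users still reference on-disk certificates, shown as a sketch against the standard kubeconfig layout:

    kubectl config view -o jsonpath='{range .users[*]}{.name}{"\t"}{.user.client-certificate}{"\n"}{end}'

Entries whose paths no longer exist account for the noise; they are warnings rather than the failure under test.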
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-553062 -n multinode-553062
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-553062 logs -n 25: (1.566701009s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-553062 ssh -n                                                                 | multinode-553062 | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | multinode-553062-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-553062 cp multinode-553062-m02:/home/docker/cp-test.txt                       | multinode-553062 | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3094019046/001/cp-test_multinode-553062-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-553062 ssh -n                                                                 | multinode-553062 | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | multinode-553062-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-553062 cp multinode-553062-m02:/home/docker/cp-test.txt                       | multinode-553062 | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | multinode-553062:/home/docker/cp-test_multinode-553062-m02_multinode-553062.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-553062 ssh -n                                                                 | multinode-553062 | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | multinode-553062-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-553062 ssh -n multinode-553062 sudo cat                                       | multinode-553062 | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | /home/docker/cp-test_multinode-553062-m02_multinode-553062.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-553062 cp multinode-553062-m02:/home/docker/cp-test.txt                       | multinode-553062 | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | multinode-553062-m03:/home/docker/cp-test_multinode-553062-m02_multinode-553062-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-553062 ssh -n                                                                 | multinode-553062 | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | multinode-553062-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-553062 ssh -n multinode-553062-m03 sudo cat                                   | multinode-553062 | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | /home/docker/cp-test_multinode-553062-m02_multinode-553062-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-553062 cp testdata/cp-test.txt                                                | multinode-553062 | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | multinode-553062-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-553062 ssh -n                                                                 | multinode-553062 | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | multinode-553062-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-553062 cp multinode-553062-m03:/home/docker/cp-test.txt                       | multinode-553062 | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3094019046/001/cp-test_multinode-553062-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-553062 ssh -n                                                                 | multinode-553062 | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | multinode-553062-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-553062 cp multinode-553062-m03:/home/docker/cp-test.txt                       | multinode-553062 | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | multinode-553062:/home/docker/cp-test_multinode-553062-m03_multinode-553062.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-553062 ssh -n                                                                 | multinode-553062 | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | multinode-553062-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-553062 ssh -n multinode-553062 sudo cat                                       | multinode-553062 | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | /home/docker/cp-test_multinode-553062-m03_multinode-553062.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-553062 cp multinode-553062-m03:/home/docker/cp-test.txt                       | multinode-553062 | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | multinode-553062-m02:/home/docker/cp-test_multinode-553062-m03_multinode-553062-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-553062 ssh -n                                                                 | multinode-553062 | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | multinode-553062-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-553062 ssh -n multinode-553062-m02 sudo cat                                   | multinode-553062 | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | /home/docker/cp-test_multinode-553062-m03_multinode-553062-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-553062 node stop m03                                                          | multinode-553062 | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	| node    | multinode-553062 node start                                                             | multinode-553062 | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-553062                                                                | multinode-553062 | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC |                     |
	| stop    | -p multinode-553062                                                                     | multinode-553062 | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC |                     |
	| start   | -p multinode-553062                                                                     | multinode-553062 | jenkins | v1.32.0 | 07 Nov 23 23:31 UTC | 07 Nov 23 23:41 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-553062                                                                | multinode-553062 | jenkins | v1.32.0 | 07 Nov 23 23:41 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/07 23:31:42
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 23:31:42.936902   33391 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:31:42.937035   33391 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:31:42.937044   33391 out.go:309] Setting ErrFile to fd 2...
	I1107 23:31:42.937049   33391 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:31:42.937222   33391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
	I1107 23:31:42.937751   33391 out.go:303] Setting JSON to false
	I1107 23:31:42.938678   33391 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4452,"bootTime":1699395451,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 23:31:42.938744   33391 start.go:138] virtualization: kvm guest
	I1107 23:31:42.941154   33391 out.go:177] * [multinode-553062] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 23:31:42.943230   33391 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 23:31:42.943250   33391 notify.go:220] Checking for updates...
	I1107 23:31:42.945644   33391 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:31:42.946907   33391 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1107 23:31:42.948198   33391 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	I1107 23:31:42.949542   33391 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 23:31:42.950886   33391 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 23:31:42.952595   33391 config.go:182] Loaded profile config "multinode-553062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:31:42.952688   33391 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:31:42.953130   33391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:31:42.953179   33391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:31:42.967267   33391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46377
	I1107 23:31:42.967617   33391 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:31:42.968096   33391 main.go:141] libmachine: Using API Version  1
	I1107 23:31:42.968120   33391 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:31:42.968415   33391 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:31:42.968581   33391 main.go:141] libmachine: (multinode-553062) Calling .DriverName
	I1107 23:31:43.001842   33391 out.go:177] * Using the kvm2 driver based on existing profile
	I1107 23:31:43.003328   33391 start.go:298] selected driver: kvm2
	I1107 23:31:43.003340   33391 start.go:902] validating driver "kvm2" against &{Name:multinode-553062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-553062 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.137 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.201 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:31:43.003468   33391 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 23:31:43.003782   33391 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:31:43.003853   33391 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17585-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1107 23:31:43.017612   33391 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1107 23:31:43.018288   33391 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 23:31:43.018320   33391 cni.go:84] Creating CNI manager for ""
	I1107 23:31:43.018328   33391 cni.go:136] 3 nodes found, recommending kindnet
	I1107 23:31:43.018340   33391 start_flags.go:323] config:
	{Name:multinode-553062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-553062 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.137 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.201 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:31:43.018550   33391 iso.go:125] acquiring lock: {Name:mk02d02b2a7a45dbdd1b46a32fb0724673cb4d8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:31:43.020473   33391 out.go:177] * Starting control plane node multinode-553062 in cluster multinode-553062
	I1107 23:31:43.022093   33391 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:31:43.022129   33391 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1107 23:31:43.022144   33391 cache.go:56] Caching tarball of preloaded images
	I1107 23:31:43.022227   33391 preload.go:174] Found /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1107 23:31:43.022238   33391 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1107 23:31:43.022348   33391 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/config.json ...
	I1107 23:31:43.022553   33391 start.go:365] acquiring machines lock for multinode-553062: {Name:mkf032f30be570950285b6e092e75fb29cc3d166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1107 23:31:43.022593   33391 start.go:369] acquired machines lock for "multinode-553062" in 21.925µs
	I1107 23:31:43.022615   33391 start.go:96] Skipping create...Using existing machine configuration
	I1107 23:31:43.022620   33391 fix.go:54] fixHost starting: 
	I1107 23:31:43.022852   33391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:31:43.022880   33391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:31:43.036437   33391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34989
	I1107 23:31:43.036922   33391 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:31:43.037403   33391 main.go:141] libmachine: Using API Version  1
	I1107 23:31:43.037431   33391 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:31:43.037766   33391 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:31:43.037974   33391 main.go:141] libmachine: (multinode-553062) Calling .DriverName
	I1107 23:31:43.038120   33391 main.go:141] libmachine: (multinode-553062) Calling .GetState
	I1107 23:31:43.039567   33391 fix.go:102] recreateIfNeeded on multinode-553062: state=Running err=<nil>
	W1107 23:31:43.039587   33391 fix.go:128] unexpected machine state, will restart: <nil>
	I1107 23:31:43.041574   33391 out.go:177] * Updating the running kvm2 "multinode-553062" VM ...
	I1107 23:31:43.043056   33391 machine.go:88] provisioning docker machine ...
	I1107 23:31:43.043076   33391 main.go:141] libmachine: (multinode-553062) Calling .DriverName
	I1107 23:31:43.043286   33391 main.go:141] libmachine: (multinode-553062) Calling .GetMachineName
	I1107 23:31:43.043451   33391 buildroot.go:166] provisioning hostname "multinode-553062"
	I1107 23:31:43.043467   33391 main.go:141] libmachine: (multinode-553062) Calling .GetMachineName
	I1107 23:31:43.043619   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHHostname
	I1107 23:31:43.046059   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:31:43.046537   33391 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:26:27 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:31:43.046557   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:31:43.046679   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHPort
	I1107 23:31:43.046842   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:31:43.046965   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:31:43.047093   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHUsername
	I1107 23:31:43.047216   33391 main.go:141] libmachine: Using SSH client type: native
	I1107 23:31:43.047557   33391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1107 23:31:43.047570   33391 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-553062 && echo "multinode-553062" | sudo tee /etc/hostname
	I1107 23:32:01.537152   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:32:07.617093   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:32:10.689052   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:32:16.769120   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:32:19.841058   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:32:25.921110   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:32:28.993063   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:32:35.073125   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:32:38.145065   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:32:44.225085   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:32:47.297084   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:32:53.377139   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:32:56.449019   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:33:02.529091   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:33:05.601117   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:33:11.681093   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:33:14.753007   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:33:20.833094   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:33:23.905065   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:33:29.985075   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:33:33.056997   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:33:39.137105   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:33:42.209125   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:33:48.293093   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:33:51.361051   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:33:57.441086   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:34:00.513087   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:34:06.593077   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:34:09.665081   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:34:15.745088   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:34:18.817063   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:34:24.897094   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:34:27.969116   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:34:34.049049   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:34:37.121097   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:34:43.201055   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:34:46.273068   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:34:52.353100   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:34:55.425049   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:35:01.505052   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:35:04.577123   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:35:10.657074   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:35:13.729128   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:35:19.809093   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:35:22.881061   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:35:28.961036   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:35:32.033088   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:35:38.113050   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:35:41.185058   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:35:47.265132   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:35:50.337053   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:35:56.417060   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:35:59.489121   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:36:05.569110   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:36:08.641117   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:36:14.721118   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:36:17.793074   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:36:23.873033   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:36:26.945051   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:36:33.025034   33391 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.246:22: connect: no route to host
	I1107 23:36:36.025992   33391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1107 23:36:36.026035   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHHostname
	I1107 23:36:36.027950   33391 machine.go:91] provisioned docker machine in 4m52.984873526s
	I1107 23:36:36.027984   33391 fix.go:56] fixHost completed within 4m53.005364733s
	I1107 23:36:36.027992   33391 start.go:83] releasing machines lock for "multinode-553062", held for 4m53.005390321s
	W1107 23:36:36.028006   33391 start.go:691] error starting host: provision: host is not running
	W1107 23:36:36.028095   33391 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1107 23:36:36.028103   33391 start.go:706] Will try again in 5 seconds ...
	I1107 23:36:41.030179   33391 start.go:365] acquiring machines lock for multinode-553062: {Name:mkf032f30be570950285b6e092e75fb29cc3d166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1107 23:36:41.030290   33391 start.go:369] acquired machines lock for "multinode-553062" in 74.797µs
	I1107 23:36:41.030315   33391 start.go:96] Skipping create...Using existing machine configuration
	I1107 23:36:41.030319   33391 fix.go:54] fixHost starting: 
	I1107 23:36:41.030635   33391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:36:41.030660   33391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:36:41.044961   33391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36399
	I1107 23:36:41.045417   33391 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:36:41.045866   33391 main.go:141] libmachine: Using API Version  1
	I1107 23:36:41.045893   33391 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:36:41.046284   33391 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:36:41.046487   33391 main.go:141] libmachine: (multinode-553062) Calling .DriverName
	I1107 23:36:41.046632   33391 main.go:141] libmachine: (multinode-553062) Calling .GetState
	I1107 23:36:41.048360   33391 fix.go:102] recreateIfNeeded on multinode-553062: state=Stopped err=<nil>
	I1107 23:36:41.048377   33391 main.go:141] libmachine: (multinode-553062) Calling .DriverName
	W1107 23:36:41.048537   33391 fix.go:128] unexpected machine state, will restart: <nil>
	I1107 23:36:41.050938   33391 out.go:177] * Restarting existing kvm2 VM for "multinode-553062" ...
	I1107 23:36:41.052557   33391 main.go:141] libmachine: (multinode-553062) Calling .Start
	I1107 23:36:41.052756   33391 main.go:141] libmachine: (multinode-553062) Ensuring networks are active...
	I1107 23:36:41.053487   33391 main.go:141] libmachine: (multinode-553062) Ensuring network default is active
	I1107 23:36:41.053858   33391 main.go:141] libmachine: (multinode-553062) Ensuring network mk-multinode-553062 is active
	I1107 23:36:41.054210   33391 main.go:141] libmachine: (multinode-553062) Getting domain xml...
	I1107 23:36:41.054921   33391 main.go:141] libmachine: (multinode-553062) Creating domain...
	I1107 23:36:42.282359   33391 main.go:141] libmachine: (multinode-553062) Waiting to get IP...
	I1107 23:36:42.283284   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:42.283767   33391 main.go:141] libmachine: (multinode-553062) DBG | unable to find current IP address of domain multinode-553062 in network mk-multinode-553062
	I1107 23:36:42.283825   33391 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:36:42.283734   34197 retry.go:31] will retry after 217.669976ms: waiting for machine to come up
	I1107 23:36:42.503277   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:42.503652   33391 main.go:141] libmachine: (multinode-553062) DBG | unable to find current IP address of domain multinode-553062 in network mk-multinode-553062
	I1107 23:36:42.503684   33391 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:36:42.503601   34197 retry.go:31] will retry after 237.952623ms: waiting for machine to come up
	I1107 23:36:42.742942   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:42.743377   33391 main.go:141] libmachine: (multinode-553062) DBG | unable to find current IP address of domain multinode-553062 in network mk-multinode-553062
	I1107 23:36:42.743399   33391 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:36:42.743332   34197 retry.go:31] will retry after 329.465603ms: waiting for machine to come up
	I1107 23:36:43.074782   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:43.075317   33391 main.go:141] libmachine: (multinode-553062) DBG | unable to find current IP address of domain multinode-553062 in network mk-multinode-553062
	I1107 23:36:43.075371   33391 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:36:43.075272   34197 retry.go:31] will retry after 488.373862ms: waiting for machine to come up
	I1107 23:36:43.564903   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:43.565379   33391 main.go:141] libmachine: (multinode-553062) DBG | unable to find current IP address of domain multinode-553062 in network mk-multinode-553062
	I1107 23:36:43.565407   33391 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:36:43.565340   34197 retry.go:31] will retry after 632.333493ms: waiting for machine to come up
	I1107 23:36:44.198780   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:44.199208   33391 main.go:141] libmachine: (multinode-553062) DBG | unable to find current IP address of domain multinode-553062 in network mk-multinode-553062
	I1107 23:36:44.199236   33391 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:36:44.199167   34197 retry.go:31] will retry after 688.610063ms: waiting for machine to come up
	I1107 23:36:44.889064   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:44.889616   33391 main.go:141] libmachine: (multinode-553062) DBG | unable to find current IP address of domain multinode-553062 in network mk-multinode-553062
	I1107 23:36:44.889645   33391 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:36:44.889567   34197 retry.go:31] will retry after 1.095694034s: waiting for machine to come up
	I1107 23:36:45.987030   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:45.987459   33391 main.go:141] libmachine: (multinode-553062) DBG | unable to find current IP address of domain multinode-553062 in network mk-multinode-553062
	I1107 23:36:45.987490   33391 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:36:45.987413   34197 retry.go:31] will retry after 1.188271729s: waiting for machine to come up
	I1107 23:36:47.176966   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:47.177477   33391 main.go:141] libmachine: (multinode-553062) DBG | unable to find current IP address of domain multinode-553062 in network mk-multinode-553062
	I1107 23:36:47.177506   33391 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:36:47.177433   34197 retry.go:31] will retry after 1.842074754s: waiting for machine to come up
	I1107 23:36:49.021790   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:49.022180   33391 main.go:141] libmachine: (multinode-553062) DBG | unable to find current IP address of domain multinode-553062 in network mk-multinode-553062
	I1107 23:36:49.022216   33391 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:36:49.022127   34197 retry.go:31] will retry after 1.711558183s: waiting for machine to come up
	I1107 23:36:50.735739   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:50.736200   33391 main.go:141] libmachine: (multinode-553062) DBG | unable to find current IP address of domain multinode-553062 in network mk-multinode-553062
	I1107 23:36:50.736241   33391 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:36:50.736127   34197 retry.go:31] will retry after 1.811554086s: waiting for machine to come up
	I1107 23:36:52.549396   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:52.549793   33391 main.go:141] libmachine: (multinode-553062) DBG | unable to find current IP address of domain multinode-553062 in network mk-multinode-553062
	I1107 23:36:52.549827   33391 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:36:52.549764   34197 retry.go:31] will retry after 3.264356724s: waiting for machine to come up
	I1107 23:36:55.818028   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:55.818512   33391 main.go:141] libmachine: (multinode-553062) DBG | unable to find current IP address of domain multinode-553062 in network mk-multinode-553062
	I1107 23:36:55.818536   33391 main.go:141] libmachine: (multinode-553062) DBG | I1107 23:36:55.818441   34197 retry.go:31] will retry after 3.250859911s: waiting for machine to come up
	I1107 23:36:59.073098   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:59.073589   33391 main.go:141] libmachine: (multinode-553062) Found IP for machine: 192.168.39.246
	I1107 23:36:59.073629   33391 main.go:141] libmachine: (multinode-553062) Reserving static IP address...
	I1107 23:36:59.073644   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has current primary IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:59.074031   33391 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "multinode-553062", mac: "52:54:00:a6:51:99", ip: "192.168.39.246"} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:36:53 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:36:59.074062   33391 main.go:141] libmachine: (multinode-553062) Reserved static IP address: 192.168.39.246
	I1107 23:36:59.074080   33391 main.go:141] libmachine: (multinode-553062) DBG | skip adding static IP to network mk-multinode-553062 - found existing host DHCP lease matching {name: "multinode-553062", mac: "52:54:00:a6:51:99", ip: "192.168.39.246"}
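The retry.go lines above show the driver polling for the VM's DHCP lease with a growing, jittered delay between attempts until an IP appears. Below is a minimal Go sketch of that wait loop, assuming a hypothetical lookup callback in place of the real libvirt lease query; the 200ms starting interval and 2s cap are illustrative constants, not minikube's actual code.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // errNoIP stands in for the driver's "unable to find current IP" condition.
    var errNoIP = errors.New("unable to find current IP address")

    // waitForIP polls lookup until it returns an IP, sleeping a randomized,
    // roughly doubling interval between attempts, as the retry.go lines suggest.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond // assumed starting interval
    	for time.Now().Before(deadline) {
    		ip, err := lookup()
    		if err == nil {
    			return ip, nil
    		}
    		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
    		time.Sleep(jittered)
    		if delay < 2*time.Second { // assumed cap on the base interval
    			delay *= 2
    		}
    	}
    	return "", fmt.Errorf("timed out waiting for IP: %w", errNoIP)
    }

    func main() {
    	attempts := 0
    	ip, err := waitForIP(func() (string, error) {
    		attempts++
    		if attempts < 4 { // simulate the lease showing up on the 4th poll
    			return "", errNoIP
    		}
    		return "192.168.39.246", nil
    	}, 30*time.Second)
    	fmt.Println(ip, err)
    }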
	I1107 23:36:59.074092   33391 main.go:141] libmachine: (multinode-553062) Waiting for SSH to be available...
	I1107 23:36:59.074102   33391 main.go:141] libmachine: (multinode-553062) DBG | Getting to WaitForSSH function...
	I1107 23:36:59.075863   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:59.076165   33391 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:36:53 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:36:59.076197   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:59.076280   33391 main.go:141] libmachine: (multinode-553062) DBG | Using SSH client type: external
	I1107 23:36:59.076330   33391 main.go:141] libmachine: (multinode-553062) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062/id_rsa (-rw-------)
	I1107 23:36:59.076373   33391 main.go:141] libmachine: (multinode-553062) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1107 23:36:59.076394   33391 main.go:141] libmachine: (multinode-553062) DBG | About to run SSH command:
	I1107 23:36:59.076408   33391 main.go:141] libmachine: (multinode-553062) DBG | exit 0
	I1107 23:36:59.168331   33391 main.go:141] libmachine: (multinode-553062) DBG | SSH cmd err, output: <nil>: 
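WaitForSSH above shells out to the system ssh client with host-key checking disabled and key-only auth, then probes reachability by running `exit 0`. A sketch of that invocation follows; runSSH is a hypothetical helper, not libmachine's API, and the paths are illustrative.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runSSH shells out to ssh with the same hardening flags the log shows:
    // no host-key persistence, no password auth, identity file only.
    func runSSH(addr, keyPath, command string) error {
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "PasswordAuthentication=no",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"-p", "22",
    		"docker@" + addr,
    		command,
    	}
    	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
    	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
    	return err
    }

    func main() {
    	// Probe the guest the way WaitForSSH does; key path is a placeholder.
    	_ = runSSH("192.168.39.246", "/path/to/id_rsa", "exit 0")
    }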
	I1107 23:36:59.168666   33391 main.go:141] libmachine: (multinode-553062) Calling .GetConfigRaw
	I1107 23:36:59.169315   33391 main.go:141] libmachine: (multinode-553062) Calling .GetIP
	I1107 23:36:59.171518   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:59.171914   33391 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:36:53 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:36:59.171947   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:59.172166   33391 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/config.json ...
	I1107 23:36:59.172336   33391 machine.go:88] provisioning docker machine ...
	I1107 23:36:59.172351   33391 main.go:141] libmachine: (multinode-553062) Calling .DriverName
	I1107 23:36:59.172555   33391 main.go:141] libmachine: (multinode-553062) Calling .GetMachineName
	I1107 23:36:59.172732   33391 buildroot.go:166] provisioning hostname "multinode-553062"
	I1107 23:36:59.172751   33391 main.go:141] libmachine: (multinode-553062) Calling .GetMachineName
	I1107 23:36:59.172893   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHHostname
	I1107 23:36:59.175221   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:59.175585   33391 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:36:53 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:36:59.175613   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:59.175708   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHPort
	I1107 23:36:59.175835   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:36:59.175958   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:36:59.176082   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHUsername
	I1107 23:36:59.176250   33391 main.go:141] libmachine: Using SSH client type: native
	I1107 23:36:59.176627   33391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1107 23:36:59.176641   33391 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-553062 && echo "multinode-553062" | sudo tee /etc/hostname
	I1107 23:36:59.314833   33391 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-553062
	
	I1107 23:36:59.314861   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHHostname
	I1107 23:36:59.317571   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:59.317923   33391 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:36:53 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:36:59.317959   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:59.318073   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHPort
	I1107 23:36:59.318306   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:36:59.318486   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:36:59.318625   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHUsername
	I1107 23:36:59.318775   33391 main.go:141] libmachine: Using SSH client type: native
	I1107 23:36:59.319099   33391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1107 23:36:59.319139   33391 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-553062' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-553062/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-553062' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 23:36:59.453065   33391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1107 23:36:59.453110   33391 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1107 23:36:59.453152   33391 buildroot.go:174] setting up certificates
	I1107 23:36:59.453167   33391 provision.go:83] configureAuth start
	I1107 23:36:59.453184   33391 main.go:141] libmachine: (multinode-553062) Calling .GetMachineName
	I1107 23:36:59.453470   33391 main.go:141] libmachine: (multinode-553062) Calling .GetIP
	I1107 23:36:59.456150   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:59.456524   33391 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:36:53 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:36:59.456558   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:59.456688   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHHostname
	I1107 23:36:59.458786   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:59.459107   33391 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:36:53 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:36:59.459136   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:59.459265   33391 provision.go:138] copyHostCerts
	I1107 23:36:59.459295   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1107 23:36:59.459330   33391 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1107 23:36:59.459347   33391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1107 23:36:59.459409   33391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1107 23:36:59.459495   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1107 23:36:59.459517   33391 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1107 23:36:59.459527   33391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1107 23:36:59.459553   33391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1107 23:36:59.459607   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1107 23:36:59.459624   33391 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1107 23:36:59.459627   33391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1107 23:36:59.459651   33391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1107 23:36:59.459707   33391 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.multinode-553062 san=[192.168.39.246 192.168.39.246 localhost 127.0.0.1 minikube multinode-553062]
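provision.go above generates a server certificate whose SAN list covers the machine IP, localhost, and both hostnames. A self-contained sketch of issuing such a cert with Go's crypto/x509 follows; unlike the real flow it self-signs instead of signing with ca.pem/ca-key.pem, and only the SAN list is taken from the log.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-553062"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour), // assumed validity
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs as listed in the provision.go line above.
    		DNSNames:    []string{"localhost", "minikube", "multinode-553062"},
    		IPAddresses: []net.IP{net.ParseIP("192.168.39.246"), net.ParseIP("127.0.0.1")},
    	}
    	// Self-signed for brevity; the real code signs with the minikube CA.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }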
	I1107 23:36:59.662587   33391 provision.go:172] copyRemoteCerts
	I1107 23:36:59.662638   33391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 23:36:59.662659   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHHostname
	I1107 23:36:59.665182   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:59.665488   33391 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:36:53 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:36:59.665512   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:59.665680   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHPort
	I1107 23:36:59.665868   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:36:59.666044   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHUsername
	I1107 23:36:59.666197   33391 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062/id_rsa Username:docker}
	I1107 23:36:59.760501   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1107 23:36:59.760576   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1107 23:36:59.785438   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1107 23:36:59.785514   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1107 23:36:59.809586   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1107 23:36:59.809645   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1107 23:36:59.834687   33391 provision.go:86] duration metric: configureAuth took 381.506976ms
	I1107 23:36:59.834720   33391 buildroot.go:189] setting minikube options for container-runtime
	I1107 23:36:59.834920   33391 config.go:182] Loaded profile config "multinode-553062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:36:59.834983   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHHostname
	I1107 23:36:59.837388   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:59.837728   33391 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:36:53 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:36:59.837748   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:36:59.837918   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHPort
	I1107 23:36:59.838089   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:36:59.838279   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:36:59.838390   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHUsername
	I1107 23:36:59.838563   33391 main.go:141] libmachine: Using SSH client type: native
	I1107 23:36:59.838905   33391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1107 23:36:59.838921   33391 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1107 23:37:00.146937   33391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1107 23:37:00.146965   33391 machine.go:91] provisioned docker machine in 974.617491ms
	I1107 23:37:00.146974   33391 start.go:300] post-start starting for "multinode-553062" (driver="kvm2")
	I1107 23:37:00.146998   33391 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 23:37:00.147016   33391 main.go:141] libmachine: (multinode-553062) Calling .DriverName
	I1107 23:37:00.147319   33391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 23:37:00.147352   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHHostname
	I1107 23:37:00.150132   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:37:00.150559   33391 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:36:53 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:37:00.150599   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:37:00.150737   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHPort
	I1107 23:37:00.150904   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:37:00.151058   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHUsername
	I1107 23:37:00.151219   33391 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062/id_rsa Username:docker}
	I1107 23:37:00.242600   33391 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 23:37:00.246902   33391 command_runner.go:130] > NAME=Buildroot
	I1107 23:37:00.246924   33391 command_runner.go:130] > VERSION=2021.02.12-1-gb75713b-dirty
	I1107 23:37:00.246931   33391 command_runner.go:130] > ID=buildroot
	I1107 23:37:00.246940   33391 command_runner.go:130] > VERSION_ID=2021.02.12
	I1107 23:37:00.246948   33391 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1107 23:37:00.247026   33391 info.go:137] Remote host: Buildroot 2021.02.12
	I1107 23:37:00.247049   33391 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1107 23:37:00.247131   33391 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1107 23:37:00.247225   33391 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1107 23:37:00.247239   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> /etc/ssl/certs/168482.pem
	I1107 23:37:00.247351   33391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 23:37:00.255689   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
	I1107 23:37:00.278039   33391 start.go:303] post-start completed in 131.051329ms
	I1107 23:37:00.278063   33391 fix.go:56] fixHost completed within 19.247742396s
	I1107 23:37:00.278081   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHHostname
	I1107 23:37:00.280761   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:37:00.281180   33391 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:36:53 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:37:00.281228   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:37:00.281405   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHPort
	I1107 23:37:00.281601   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:37:00.281768   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:37:00.281899   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHUsername
	I1107 23:37:00.282064   33391 main.go:141] libmachine: Using SSH client type: native
	I1107 23:37:00.282434   33391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I1107 23:37:00.282447   33391 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1107 23:37:00.409579   33391 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699400220.360730054
	
	I1107 23:37:00.409604   33391 fix.go:206] guest clock: 1699400220.360730054
	I1107 23:37:00.409614   33391 fix.go:219] Guest: 2023-11-07 23:37:00.360730054 +0000 UTC Remote: 2023-11-07 23:37:00.27806784 +0000 UTC m=+317.389550500 (delta=82.662214ms)
	I1107 23:37:00.409638   33391 fix.go:190] guest clock delta is within tolerance: 82.662214ms
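The fix.go lines compare the guest's `date +%s.%N` output against the host clock and accept the restart when the skew is small. A sketch of that check follows; clockDeltaOK is an illustrative name, and the 2-second tolerance is an assumption, since the log only shows that an 82ms delta passed.

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"time"
    )

    // clockDeltaOK parses the guest's fractional-seconds timestamp and checks
    // the skew against the host clock and a tolerance.
    func clockDeltaOK(guestOut string, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	secs, err := strconv.ParseFloat(guestOut, 64)
    	if err != nil {
    		return 0, false
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := guest.Sub(host)
    	return delta, math.Abs(float64(delta)) <= float64(tolerance)
    }

    func main() {
    	// Values taken from the log lines above; tolerance is assumed.
    	delta, ok := clockDeltaOK("1699400220.360730054",
    		time.Unix(1699400220, 278067840), 2*time.Second)
    	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
    }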
	I1107 23:37:00.409644   33391 start.go:83] releasing machines lock for "multinode-553062", held for 19.379340818s
	I1107 23:37:00.409667   33391 main.go:141] libmachine: (multinode-553062) Calling .DriverName
	I1107 23:37:00.409941   33391 main.go:141] libmachine: (multinode-553062) Calling .GetIP
	I1107 23:37:00.412584   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:37:00.412963   33391 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:36:53 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:37:00.412989   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:37:00.413150   33391 main.go:141] libmachine: (multinode-553062) Calling .DriverName
	I1107 23:37:00.413607   33391 main.go:141] libmachine: (multinode-553062) Calling .DriverName
	I1107 23:37:00.413795   33391 main.go:141] libmachine: (multinode-553062) Calling .DriverName
	I1107 23:37:00.413858   33391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 23:37:00.413906   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHHostname
	I1107 23:37:00.414011   33391 ssh_runner.go:195] Run: cat /version.json
	I1107 23:37:00.414058   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHHostname
	I1107 23:37:00.416181   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:37:00.416462   33391 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:36:53 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:37:00.416490   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:37:00.416645   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHPort
	I1107 23:37:00.416728   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:37:00.416792   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:37:00.416967   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHUsername
	I1107 23:37:00.417123   33391 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062/id_rsa Username:docker}
	I1107 23:37:00.417182   33391 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:36:53 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:37:00.417247   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:37:00.417395   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHPort
	I1107 23:37:00.417556   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:37:00.417705   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHUsername
	I1107 23:37:00.417847   33391 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062/id_rsa Username:docker}
	I1107 23:37:00.505089   33391 command_runner.go:130] > {"iso_version": "v1.32.1", "kicbase_version": "v0.0.41-1698881667-17516", "minikube_version": "v1.32.0", "commit": "0b29983f4bdc1ad55180ee43e3f34cae6c24dee4"}
	I1107 23:37:00.505566   33391 ssh_runner.go:195] Run: systemctl --version
	I1107 23:37:00.525492   33391 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1107 23:37:00.525538   33391 command_runner.go:130] > systemd 247 (247)
	I1107 23:37:00.525551   33391 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1107 23:37:00.525597   33391 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1107 23:37:00.672635   33391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1107 23:37:00.678660   33391 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1107 23:37:00.678708   33391 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1107 23:37:00.678763   33391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:37:00.693132   33391 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1107 23:37:00.693186   33391 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1107 23:37:00.693197   33391 start.go:472] detecting cgroup driver to use...
	I1107 23:37:00.693258   33391 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1107 23:37:00.706086   33391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1107 23:37:00.718273   33391 docker.go:203] disabling cri-docker service (if available) ...
	I1107 23:37:00.718328   33391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1107 23:37:00.731400   33391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1107 23:37:00.743716   33391 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1107 23:37:00.845600   33391 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1107 23:37:00.845688   33391 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1107 23:37:00.859551   33391 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1107 23:37:00.966781   33391 docker.go:219] disabling docker service ...
	I1107 23:37:00.966872   33391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1107 23:37:00.979436   33391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1107 23:37:00.991292   33391 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1107 23:37:00.991368   33391 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1107 23:37:01.004681   33391 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1107 23:37:01.099886   33391 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1107 23:37:01.112179   33391 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1107 23:37:01.112584   33391 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1107 23:37:01.206415   33391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
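The docker.go steps above run a stop/disable/mask sequence through systemctl and treat failures like "Unit docker.service not loaded" as non-fatal. A local sketch of the same sequence; the real flow runs each command through ssh_runner inside the VM.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same command sequence as the log; errors are reported, not fatal.
    	steps := [][]string{
    		{"systemctl", "stop", "-f", "docker.socket"},
    		{"systemctl", "stop", "-f", "docker.service"},
    		{"systemctl", "disable", "docker.socket"},
    		{"systemctl", "mask", "docker.service"},
    		{"systemctl", "is-active", "--quiet", "service", "docker"},
    	}
    	for _, s := range steps {
    		if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
    			fmt.Printf("! %v: %s\n", err, out)
    		}
    	}
    }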
	I1107 23:37:01.218721   33391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 23:37:01.235721   33391 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1107 23:37:01.236123   33391 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1107 23:37:01.236180   33391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:37:01.245449   33391 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1107 23:37:01.245503   33391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:37:01.254747   33391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:37:01.264171   33391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
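crio.go above sets the pause image and cgroup driver by rewriting `pause_image = ...` and `cgroup_manager = ...` lines in the 02-crio.conf drop-in via sed. The same rewrite done in-process, as a sketch: setCrioKey is a hypothetical helper, and it edits a local file rather than going over ssh.

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setCrioKey rewrites any `<key> = ...` line the way the sed commands
    // above do, replacing the whole line with a quoted value.
    func setCrioKey(path, key, value string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	// Values match the log; the path here is a local stand-in for
    	// /etc/crio/crio.conf.d/02-crio.conf inside the guest.
    	if err := setCrioKey("02-crio.conf", "pause_image", "registry.k8s.io/pause:3.9"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    	if err := setCrioKey("02-crio.conf", "cgroup_manager", "cgroupfs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }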
	I1107 23:37:01.273403   33391 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1107 23:37:01.282585   33391 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1107 23:37:01.290454   33391 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1107 23:37:01.290682   33391 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1107 23:37:01.290729   33391 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1107 23:37:01.302239   33391 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1107 23:37:01.310301   33391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 23:37:01.413657   33391 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1107 23:37:01.581830   33391 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1107 23:37:01.581900   33391 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1107 23:37:01.587338   33391 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1107 23:37:01.587364   33391 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1107 23:37:01.587374   33391 command_runner.go:130] > Device: 16h/22d	Inode: 738         Links: 1
	I1107 23:37:01.587383   33391 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1107 23:37:01.587391   33391 command_runner.go:130] > Access: 2023-11-07 23:37:01.517905698 +0000
	I1107 23:37:01.587399   33391 command_runner.go:130] > Modify: 2023-11-07 23:37:01.517905698 +0000
	I1107 23:37:01.587408   33391 command_runner.go:130] > Change: 2023-11-07 23:37:01.517905698 +0000
	I1107 23:37:01.587415   33391 command_runner.go:130] >  Birth: -
	I1107 23:37:01.587771   33391 start.go:540] Will wait 60s for crictl version
	I1107 23:37:01.587846   33391 ssh_runner.go:195] Run: which crictl
	I1107 23:37:01.591536   33391 command_runner.go:130] > /usr/bin/crictl
	I1107 23:37:01.591858   33391 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1107 23:37:01.629523   33391 command_runner.go:130] > Version:  0.1.0
	I1107 23:37:01.629546   33391 command_runner.go:130] > RuntimeName:  cri-o
	I1107 23:37:01.629551   33391 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1107 23:37:01.629559   33391 command_runner.go:130] > RuntimeApiVersion:  v1
	I1107 23:37:01.629655   33391 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1107 23:37:01.629744   33391 ssh_runner.go:195] Run: crio --version
	I1107 23:37:01.677123   33391 command_runner.go:130] > crio version 1.24.1
	I1107 23:37:01.677142   33391 command_runner.go:130] > Version:          1.24.1
	I1107 23:37:01.677149   33391 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1107 23:37:01.677154   33391 command_runner.go:130] > GitTreeState:     dirty
	I1107 23:37:01.677159   33391 command_runner.go:130] > BuildDate:        2023-11-07T07:32:32Z
	I1107 23:37:01.677164   33391 command_runner.go:130] > GoVersion:        go1.19.9
	I1107 23:37:01.677168   33391 command_runner.go:130] > Compiler:         gc
	I1107 23:37:01.677172   33391 command_runner.go:130] > Platform:         linux/amd64
	I1107 23:37:01.677179   33391 command_runner.go:130] > Linkmode:         dynamic
	I1107 23:37:01.677195   33391 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1107 23:37:01.677203   33391 command_runner.go:130] > SeccompEnabled:   true
	I1107 23:37:01.677210   33391 command_runner.go:130] > AppArmorEnabled:  false
	I1107 23:37:01.677338   33391 ssh_runner.go:195] Run: crio --version
	I1107 23:37:01.725217   33391 command_runner.go:130] > crio version 1.24.1
	I1107 23:37:01.725236   33391 command_runner.go:130] > Version:          1.24.1
	I1107 23:37:01.725242   33391 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1107 23:37:01.725246   33391 command_runner.go:130] > GitTreeState:     dirty
	I1107 23:37:01.725253   33391 command_runner.go:130] > BuildDate:        2023-11-07T07:32:32Z
	I1107 23:37:01.725259   33391 command_runner.go:130] > GoVersion:        go1.19.9
	I1107 23:37:01.725263   33391 command_runner.go:130] > Compiler:         gc
	I1107 23:37:01.725273   33391 command_runner.go:130] > Platform:         linux/amd64
	I1107 23:37:01.725283   33391 command_runner.go:130] > Linkmode:         dynamic
	I1107 23:37:01.725294   33391 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1107 23:37:01.725302   33391 command_runner.go:130] > SeccompEnabled:   true
	I1107 23:37:01.725309   33391 command_runner.go:130] > AppArmorEnabled:  false
	I1107 23:37:01.729504   33391 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1107 23:37:01.731089   33391 main.go:141] libmachine: (multinode-553062) Calling .GetIP
	I1107 23:37:01.733707   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:37:01.734059   33391 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:36:53 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:37:01.734081   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:37:01.734316   33391 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1107 23:37:01.738589   33391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 23:37:01.750863   33391 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:37:01.750922   33391 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 23:37:01.787715   33391 command_runner.go:130] > {
	I1107 23:37:01.787736   33391 command_runner.go:130] >   "images": [
	I1107 23:37:01.787740   33391 command_runner.go:130] >     {
	I1107 23:37:01.787752   33391 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1107 23:37:01.787757   33391 command_runner.go:130] >       "repoTags": [
	I1107 23:37:01.787763   33391 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1107 23:37:01.787768   33391 command_runner.go:130] >       ],
	I1107 23:37:01.787772   33391 command_runner.go:130] >       "repoDigests": [
	I1107 23:37:01.787785   33391 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1107 23:37:01.787803   33391 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1107 23:37:01.787810   33391 command_runner.go:130] >       ],
	I1107 23:37:01.787818   33391 command_runner.go:130] >       "size": "750414",
	I1107 23:37:01.787826   33391 command_runner.go:130] >       "uid": {
	I1107 23:37:01.787831   33391 command_runner.go:130] >         "value": "65535"
	I1107 23:37:01.787835   33391 command_runner.go:130] >       },
	I1107 23:37:01.787840   33391 command_runner.go:130] >       "username": "",
	I1107 23:37:01.787848   33391 command_runner.go:130] >       "spec": null,
	I1107 23:37:01.787852   33391 command_runner.go:130] >       "pinned": false
	I1107 23:37:01.787856   33391 command_runner.go:130] >     }
	I1107 23:37:01.787861   33391 command_runner.go:130] >   ]
	I1107 23:37:01.787867   33391 command_runner.go:130] > }
	I1107 23:37:01.788060   33391 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
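The preload check above lists images with `crictl images --output json` and looks for the kube-apiserver tag of the target Kubernetes version; finding only pause:3.9 means the preload tarball must be copied in. A sketch of that decision follows; imageList and hasImage are illustrative names, and only the JSON fields the check needs are declared.

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // imageList mirrors the shape of `crictl images --output json` as printed
    // in the log above.
    type imageList struct {
    	Images []struct {
    		ID       string   `json:"id"`
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    // hasImage reports whether any listed image carries the wanted tag.
    func hasImage(raw []byte, want string) (bool, error) {
    	var list imageList
    	if err := json.Unmarshal(raw, &list); err != nil {
    		return false, err
    	}
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			if tag == want {
    				return true, nil
    			}
    		}
    	}
    	return false, nil
    }

    func main() {
    	raw := []byte(`{"images":[{"id":"e6f18...","repoTags":["registry.k8s.io/pause:3.9"]}]}`)
    	ok, err := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.28.3")
    	fmt.Println(ok, err) // false <nil>: images are not preloaded
    }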
	I1107 23:37:01.788125   33391 ssh_runner.go:195] Run: which lz4
	I1107 23:37:01.791852   33391 command_runner.go:130] > /usr/bin/lz4
	I1107 23:37:01.791957   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1107 23:37:01.792046   33391 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1107 23:37:01.795759   33391 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1107 23:37:01.795985   33391 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1107 23:37:01.796011   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1107 23:37:03.631785   33391 crio.go:444] Took 1.839770 seconds to copy over tarball
	I1107 23:37:03.631850   33391 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1107 23:37:06.330164   33391 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.698292512s)
	I1107 23:37:06.330189   33391 crio.go:451] Took 2.698384 seconds to extract the tarball
	I1107 23:37:06.330200   33391 ssh_runner.go:146] rm: /preloaded.tar.lz4
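After scp-ing the preload tarball into the guest, the runner extracts it with `tar -I lz4 -C /var -xf` and reports the elapsed time, then removes the tarball. A local sketch of that step; the real command executes over ssh inside the VM, and the paths here mirror the log.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // extractPreload mirrors the tar invocation in the log and reports how
    // long the extraction took, the way the ssh_runner.go:235 line does.
    func extractPreload(tarball, dest string) error {
    	start := time.Now()
    	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", dest, "-xf", tarball)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("tar failed: %v: %s", err, out)
    	}
    	fmt.Printf("Completed: tar -I lz4 -C %s -xf %s: (%s)\n", dest, tarball, time.Since(start))
    	return nil
    }

    func main() {
    	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
    		fmt.Println(err)
    	}
    }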
	I1107 23:37:06.370911   33391 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 23:37:06.425179   33391 command_runner.go:130] > {
	I1107 23:37:06.425201   33391 command_runner.go:130] >   "images": [
	I1107 23:37:06.425209   33391 command_runner.go:130] >     {
	I1107 23:37:06.425216   33391 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1107 23:37:06.425221   33391 command_runner.go:130] >       "repoTags": [
	I1107 23:37:06.425227   33391 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1107 23:37:06.425231   33391 command_runner.go:130] >       ],
	I1107 23:37:06.425237   33391 command_runner.go:130] >       "repoDigests": [
	I1107 23:37:06.425245   33391 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1107 23:37:06.425258   33391 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1107 23:37:06.425262   33391 command_runner.go:130] >       ],
	I1107 23:37:06.425267   33391 command_runner.go:130] >       "size": "65258016",
	I1107 23:37:06.425273   33391 command_runner.go:130] >       "uid": null,
	I1107 23:37:06.425277   33391 command_runner.go:130] >       "username": "",
	I1107 23:37:06.425333   33391 command_runner.go:130] >       "spec": null,
	I1107 23:37:06.425343   33391 command_runner.go:130] >       "pinned": false
	I1107 23:37:06.425346   33391 command_runner.go:130] >     },
	I1107 23:37:06.425350   33391 command_runner.go:130] >     {
	I1107 23:37:06.425356   33391 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1107 23:37:06.425361   33391 command_runner.go:130] >       "repoTags": [
	I1107 23:37:06.425371   33391 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1107 23:37:06.425377   33391 command_runner.go:130] >       ],
	I1107 23:37:06.425382   33391 command_runner.go:130] >       "repoDigests": [
	I1107 23:37:06.425392   33391 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1107 23:37:06.425401   33391 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1107 23:37:06.425407   33391 command_runner.go:130] >       ],
	I1107 23:37:06.425415   33391 command_runner.go:130] >       "size": "31470524",
	I1107 23:37:06.425422   33391 command_runner.go:130] >       "uid": null,
	I1107 23:37:06.425426   33391 command_runner.go:130] >       "username": "",
	I1107 23:37:06.425432   33391 command_runner.go:130] >       "spec": null,
	I1107 23:37:06.425436   33391 command_runner.go:130] >       "pinned": false
	I1107 23:37:06.425442   33391 command_runner.go:130] >     },
	I1107 23:37:06.425446   33391 command_runner.go:130] >     {
	I1107 23:37:06.425452   33391 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1107 23:37:06.425459   33391 command_runner.go:130] >       "repoTags": [
	I1107 23:37:06.425465   33391 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1107 23:37:06.425471   33391 command_runner.go:130] >       ],
	I1107 23:37:06.425480   33391 command_runner.go:130] >       "repoDigests": [
	I1107 23:37:06.425491   33391 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1107 23:37:06.425501   33391 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1107 23:37:06.425507   33391 command_runner.go:130] >       ],
	I1107 23:37:06.425511   33391 command_runner.go:130] >       "size": "53621675",
	I1107 23:37:06.425518   33391 command_runner.go:130] >       "uid": null,
	I1107 23:37:06.425522   33391 command_runner.go:130] >       "username": "",
	I1107 23:37:06.425528   33391 command_runner.go:130] >       "spec": null,
	I1107 23:37:06.425533   33391 command_runner.go:130] >       "pinned": false
	I1107 23:37:06.425538   33391 command_runner.go:130] >     },
	I1107 23:37:06.425542   33391 command_runner.go:130] >     {
	I1107 23:37:06.425548   33391 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1107 23:37:06.425554   33391 command_runner.go:130] >       "repoTags": [
	I1107 23:37:06.425560   33391 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1107 23:37:06.425566   33391 command_runner.go:130] >       ],
	I1107 23:37:06.425570   33391 command_runner.go:130] >       "repoDigests": [
	I1107 23:37:06.425579   33391 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1107 23:37:06.425588   33391 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1107 23:37:06.425608   33391 command_runner.go:130] >       ],
	I1107 23:37:06.425617   33391 command_runner.go:130] >       "size": "295456551",
	I1107 23:37:06.425624   33391 command_runner.go:130] >       "uid": {
	I1107 23:37:06.425628   33391 command_runner.go:130] >         "value": "0"
	I1107 23:37:06.425635   33391 command_runner.go:130] >       },
	I1107 23:37:06.425639   33391 command_runner.go:130] >       "username": "",
	I1107 23:37:06.425645   33391 command_runner.go:130] >       "spec": null,
	I1107 23:37:06.425650   33391 command_runner.go:130] >       "pinned": false
	I1107 23:37:06.425655   33391 command_runner.go:130] >     },
	I1107 23:37:06.425659   33391 command_runner.go:130] >     {
	I1107 23:37:06.425667   33391 command_runner.go:130] >       "id": "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076",
	I1107 23:37:06.425674   33391 command_runner.go:130] >       "repoTags": [
	I1107 23:37:06.425679   33391 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.3"
	I1107 23:37:06.425685   33391 command_runner.go:130] >       ],
	I1107 23:37:06.425689   33391 command_runner.go:130] >       "repoDigests": [
	I1107 23:37:06.425699   33391 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab",
	I1107 23:37:06.425708   33391 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"
	I1107 23:37:06.425714   33391 command_runner.go:130] >       ],
	I1107 23:37:06.425719   33391 command_runner.go:130] >       "size": "127165392",
	I1107 23:37:06.425727   33391 command_runner.go:130] >       "uid": {
	I1107 23:37:06.425731   33391 command_runner.go:130] >         "value": "0"
	I1107 23:37:06.425738   33391 command_runner.go:130] >       },
	I1107 23:37:06.425742   33391 command_runner.go:130] >       "username": "",
	I1107 23:37:06.425748   33391 command_runner.go:130] >       "spec": null,
	I1107 23:37:06.425752   33391 command_runner.go:130] >       "pinned": false
	I1107 23:37:06.425758   33391 command_runner.go:130] >     },
	I1107 23:37:06.425762   33391 command_runner.go:130] >     {
	I1107 23:37:06.425770   33391 command_runner.go:130] >       "id": "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3",
	I1107 23:37:06.425776   33391 command_runner.go:130] >       "repoTags": [
	I1107 23:37:06.425782   33391 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.3"
	I1107 23:37:06.425788   33391 command_runner.go:130] >       ],
	I1107 23:37:06.425792   33391 command_runner.go:130] >       "repoDigests": [
	I1107 23:37:06.425803   33391 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707",
	I1107 23:37:06.425813   33391 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d"
	I1107 23:37:06.425816   33391 command_runner.go:130] >       ],
	I1107 23:37:06.425821   33391 command_runner.go:130] >       "size": "123188534",
	I1107 23:37:06.425827   33391 command_runner.go:130] >       "uid": {
	I1107 23:37:06.425834   33391 command_runner.go:130] >         "value": "0"
	I1107 23:37:06.425840   33391 command_runner.go:130] >       },
	I1107 23:37:06.425846   33391 command_runner.go:130] >       "username": "",
	I1107 23:37:06.425853   33391 command_runner.go:130] >       "spec": null,
	I1107 23:37:06.425857   33391 command_runner.go:130] >       "pinned": false
	I1107 23:37:06.425863   33391 command_runner.go:130] >     },
	I1107 23:37:06.425867   33391 command_runner.go:130] >     {
	I1107 23:37:06.425875   33391 command_runner.go:130] >       "id": "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf",
	I1107 23:37:06.425881   33391 command_runner.go:130] >       "repoTags": [
	I1107 23:37:06.425886   33391 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.3"
	I1107 23:37:06.425894   33391 command_runner.go:130] >       ],
	I1107 23:37:06.425898   33391 command_runner.go:130] >       "repoDigests": [
	I1107 23:37:06.425908   33391 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8",
	I1107 23:37:06.425917   33391 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"
	I1107 23:37:06.425923   33391 command_runner.go:130] >       ],
	I1107 23:37:06.425927   33391 command_runner.go:130] >       "size": "74691991",
	I1107 23:37:06.425933   33391 command_runner.go:130] >       "uid": null,
	I1107 23:37:06.425938   33391 command_runner.go:130] >       "username": "",
	I1107 23:37:06.425946   33391 command_runner.go:130] >       "spec": null,
	I1107 23:37:06.425952   33391 command_runner.go:130] >       "pinned": false
	I1107 23:37:06.425956   33391 command_runner.go:130] >     },
	I1107 23:37:06.425962   33391 command_runner.go:130] >     {
	I1107 23:37:06.425968   33391 command_runner.go:130] >       "id": "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4",
	I1107 23:37:06.425975   33391 command_runner.go:130] >       "repoTags": [
	I1107 23:37:06.425980   33391 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.3"
	I1107 23:37:06.425986   33391 command_runner.go:130] >       ],
	I1107 23:37:06.425990   33391 command_runner.go:130] >       "repoDigests": [
	I1107 23:37:06.426040   33391 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725",
	I1107 23:37:06.426051   33391 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374"
	I1107 23:37:06.426055   33391 command_runner.go:130] >       ],
	I1107 23:37:06.426059   33391 command_runner.go:130] >       "size": "61498678",
	I1107 23:37:06.426064   33391 command_runner.go:130] >       "uid": {
	I1107 23:37:06.426068   33391 command_runner.go:130] >         "value": "0"
	I1107 23:37:06.426074   33391 command_runner.go:130] >       },
	I1107 23:37:06.426082   33391 command_runner.go:130] >       "username": "",
	I1107 23:37:06.426088   33391 command_runner.go:130] >       "spec": null,
	I1107 23:37:06.426095   33391 command_runner.go:130] >       "pinned": false
	I1107 23:37:06.426101   33391 command_runner.go:130] >     },
	I1107 23:37:06.426105   33391 command_runner.go:130] >     {
	I1107 23:37:06.426113   33391 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1107 23:37:06.426119   33391 command_runner.go:130] >       "repoTags": [
	I1107 23:37:06.426124   33391 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1107 23:37:06.426134   33391 command_runner.go:130] >       ],
	I1107 23:37:06.426141   33391 command_runner.go:130] >       "repoDigests": [
	I1107 23:37:06.426148   33391 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1107 23:37:06.426157   33391 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1107 23:37:06.426163   33391 command_runner.go:130] >       ],
	I1107 23:37:06.426168   33391 command_runner.go:130] >       "size": "750414",
	I1107 23:37:06.426174   33391 command_runner.go:130] >       "uid": {
	I1107 23:37:06.426178   33391 command_runner.go:130] >         "value": "65535"
	I1107 23:37:06.426185   33391 command_runner.go:130] >       },
	I1107 23:37:06.426189   33391 command_runner.go:130] >       "username": "",
	I1107 23:37:06.426193   33391 command_runner.go:130] >       "spec": null,
	I1107 23:37:06.426200   33391 command_runner.go:130] >       "pinned": false
	I1107 23:37:06.426205   33391 command_runner.go:130] >     }
	I1107 23:37:06.426212   33391 command_runner.go:130] >   ]
	I1107 23:37:06.426215   33391 command_runner.go:130] > }
	I1107 23:37:06.427046   33391 crio.go:496] all images are preloaded for cri-o runtime.
	I1107 23:37:06.427061   33391 cache_images.go:84] Images are preloaded, skipping loading
	I1107 23:37:06.427118   33391 ssh_runner.go:195] Run: crio config
	I1107 23:37:06.480027   33391 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1107 23:37:06.480053   33391 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1107 23:37:06.480065   33391 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1107 23:37:06.480072   33391 command_runner.go:130] > #
	I1107 23:37:06.480086   33391 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1107 23:37:06.480093   33391 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1107 23:37:06.480099   33391 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1107 23:37:06.480108   33391 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1107 23:37:06.480115   33391 command_runner.go:130] > # reload'.
	I1107 23:37:06.480124   33391 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1107 23:37:06.480134   33391 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1107 23:37:06.480153   33391 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1107 23:37:06.480167   33391 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1107 23:37:06.480174   33391 command_runner.go:130] > [crio]
	I1107 23:37:06.480187   33391 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1107 23:37:06.480199   33391 command_runner.go:130] > # container images, in this directory.
	I1107 23:37:06.480211   33391 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1107 23:37:06.480227   33391 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1107 23:37:06.480239   33391 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1107 23:37:06.480252   33391 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1107 23:37:06.480261   33391 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1107 23:37:06.480268   33391 command_runner.go:130] > storage_driver = "overlay"
	I1107 23:37:06.480274   33391 command_runner.go:130] > # List of options to pass to the storage driver. Please refer to
	I1107 23:37:06.480282   33391 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1107 23:37:06.480289   33391 command_runner.go:130] > storage_option = [
	I1107 23:37:06.480299   33391 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1107 23:37:06.480309   33391 command_runner.go:130] > ]
	I1107 23:37:06.480320   33391 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1107 23:37:06.480333   33391 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1107 23:37:06.480344   33391 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1107 23:37:06.480354   33391 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1107 23:37:06.480365   33391 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1107 23:37:06.480374   33391 command_runner.go:130] > # always happen on a node reboot
	I1107 23:37:06.480382   33391 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1107 23:37:06.480395   33391 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1107 23:37:06.480405   33391 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1107 23:37:06.480418   33391 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1107 23:37:06.480427   33391 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1107 23:37:06.480443   33391 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1107 23:37:06.480460   33391 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1107 23:37:06.480467   33391 command_runner.go:130] > # internal_wipe = true
	I1107 23:37:06.480494   33391 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1107 23:37:06.480508   33391 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1107 23:37:06.480521   33391 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1107 23:37:06.480562   33391 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1107 23:37:06.480577   33391 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1107 23:37:06.480584   33391 command_runner.go:130] > [crio.api]
	I1107 23:37:06.480598   33391 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1107 23:37:06.480609   33391 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1107 23:37:06.480621   33391 command_runner.go:130] > # IP address on which the stream server will listen.
	I1107 23:37:06.480631   33391 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1107 23:37:06.480640   33391 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1107 23:37:06.480647   33391 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1107 23:37:06.480652   33391 command_runner.go:130] > # stream_port = "0"
	I1107 23:37:06.480660   33391 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1107 23:37:06.480665   33391 command_runner.go:130] > # stream_enable_tls = false
	I1107 23:37:06.480671   33391 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1107 23:37:06.480677   33391 command_runner.go:130] > # stream_idle_timeout = ""
	I1107 23:37:06.480683   33391 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1107 23:37:06.480694   33391 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1107 23:37:06.480698   33391 command_runner.go:130] > # minutes.
	I1107 23:37:06.480705   33391 command_runner.go:130] > # stream_tls_cert = ""
	I1107 23:37:06.480711   33391 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1107 23:37:06.480719   33391 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1107 23:37:06.480726   33391 command_runner.go:130] > # stream_tls_key = ""
	I1107 23:37:06.480734   33391 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1107 23:37:06.480742   33391 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1107 23:37:06.480750   33391 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1107 23:37:06.480756   33391 command_runner.go:130] > # stream_tls_ca = ""
	I1107 23:37:06.480766   33391 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1107 23:37:06.480770   33391 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1107 23:37:06.480784   33391 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1107 23:37:06.480794   33391 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1107 23:37:06.480849   33391 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1107 23:37:06.480860   33391 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1107 23:37:06.480864   33391 command_runner.go:130] > [crio.runtime]
	I1107 23:37:06.480872   33391 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1107 23:37:06.480877   33391 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1107 23:37:06.480882   33391 command_runner.go:130] > # "nofile=1024:2048"
	I1107 23:37:06.480889   33391 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1107 23:37:06.480899   33391 command_runner.go:130] > # default_ulimits = [
	I1107 23:37:06.480905   33391 command_runner.go:130] > # ]
	I1107 23:37:06.480918   33391 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1107 23:37:06.480934   33391 command_runner.go:130] > # no_pivot = false
	I1107 23:37:06.480947   33391 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1107 23:37:06.480961   33391 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1107 23:37:06.480971   33391 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1107 23:37:06.480984   33391 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1107 23:37:06.480996   33391 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1107 23:37:06.481012   33391 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1107 23:37:06.481023   33391 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1107 23:37:06.481032   33391 command_runner.go:130] > # Cgroup setting for conmon
	I1107 23:37:06.481046   33391 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1107 23:37:06.481057   33391 command_runner.go:130] > conmon_cgroup = "pod"
	I1107 23:37:06.481071   33391 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1107 23:37:06.481083   33391 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1107 23:37:06.481098   33391 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1107 23:37:06.481104   33391 command_runner.go:130] > conmon_env = [
	I1107 23:37:06.481111   33391 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1107 23:37:06.481117   33391 command_runner.go:130] > ]
	I1107 23:37:06.481123   33391 command_runner.go:130] > # Additional environment variables to set for all the
	I1107 23:37:06.481137   33391 command_runner.go:130] > # containers. These are overridden if set in the
	I1107 23:37:06.481150   33391 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1107 23:37:06.481161   33391 command_runner.go:130] > # default_env = [
	I1107 23:37:06.481171   33391 command_runner.go:130] > # ]
	I1107 23:37:06.481181   33391 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1107 23:37:06.481190   33391 command_runner.go:130] > # selinux = false
	I1107 23:37:06.481203   33391 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1107 23:37:06.481216   33391 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1107 23:37:06.481224   33391 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1107 23:37:06.481228   33391 command_runner.go:130] > # seccomp_profile = ""
	I1107 23:37:06.481234   33391 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1107 23:37:06.481242   33391 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1107 23:37:06.481248   33391 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1107 23:37:06.481259   33391 command_runner.go:130] > # which might increase security.
	I1107 23:37:06.481269   33391 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1107 23:37:06.481280   33391 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1107 23:37:06.481294   33391 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1107 23:37:06.481308   33391 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1107 23:37:06.481323   33391 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1107 23:37:06.481336   33391 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:37:06.481344   33391 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1107 23:37:06.481383   33391 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1107 23:37:06.481394   33391 command_runner.go:130] > # the cgroup blockio controller.
	I1107 23:37:06.481406   33391 command_runner.go:130] > # blockio_config_file = ""
	I1107 23:37:06.481417   33391 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1107 23:37:06.481423   33391 command_runner.go:130] > # irqbalance daemon.
	I1107 23:37:06.481429   33391 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1107 23:37:06.481437   33391 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1107 23:37:06.481442   33391 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:37:06.481449   33391 command_runner.go:130] > # rdt_config_file = ""
	I1107 23:37:06.481454   33391 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1107 23:37:06.481461   33391 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1107 23:37:06.481467   33391 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1107 23:37:06.481479   33391 command_runner.go:130] > # separate_pull_cgroup = ""
	I1107 23:37:06.481487   33391 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1107 23:37:06.481496   33391 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1107 23:37:06.481503   33391 command_runner.go:130] > # will be added.
	I1107 23:37:06.481510   33391 command_runner.go:130] > # default_capabilities = [
	I1107 23:37:06.481514   33391 command_runner.go:130] > # 	"CHOWN",
	I1107 23:37:06.481520   33391 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1107 23:37:06.481524   33391 command_runner.go:130] > # 	"FSETID",
	I1107 23:37:06.481531   33391 command_runner.go:130] > # 	"FOWNER",
	I1107 23:37:06.481535   33391 command_runner.go:130] > # 	"SETGID",
	I1107 23:37:06.481541   33391 command_runner.go:130] > # 	"SETUID",
	I1107 23:37:06.481545   33391 command_runner.go:130] > # 	"SETPCAP",
	I1107 23:37:06.481552   33391 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1107 23:37:06.481556   33391 command_runner.go:130] > # 	"KILL",
	I1107 23:37:06.481560   33391 command_runner.go:130] > # ]
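For reference, the commented list above is what CRI-O falls back to when nothing is set; an override simply uncomments and edits the list. A minimal sketch in the same TOML format (which capabilities to drop is purely illustrative, not taken from this run):

	default_capabilities = [
		"CHOWN",
		"DAC_OVERRIDE",
		"FSETID",
		"FOWNER",
		"SETPCAP",
		"NET_BIND_SERVICE",
		"KILL",
	]
	# SETUID/SETGID omitted here only as an example of tightening the defaults.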
	I1107 23:37:06.481566   33391 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1107 23:37:06.481574   33391 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1107 23:37:06.481579   33391 command_runner.go:130] > # default_sysctls = [
	I1107 23:37:06.481582   33391 command_runner.go:130] > # ]
	I1107 23:37:06.481590   33391 command_runner.go:130] > # List of devices on the host that a
	I1107 23:37:06.481596   33391 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1107 23:37:06.481608   33391 command_runner.go:130] > # allowed_devices = [
	I1107 23:37:06.481612   33391 command_runner.go:130] > # 	"/dev/fuse",
	I1107 23:37:06.481618   33391 command_runner.go:130] > # ]
	I1107 23:37:06.481623   33391 command_runner.go:130] > # List of additional devices, specified as
	I1107 23:37:06.481632   33391 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1107 23:37:06.481640   33391 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1107 23:37:06.481671   33391 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1107 23:37:06.481678   33391 command_runner.go:130] > # additional_devices = [
	I1107 23:37:06.481682   33391 command_runner.go:130] > # ]
	I1107 23:37:06.481689   33391 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1107 23:37:06.481697   33391 command_runner.go:130] > # cdi_spec_dirs = [
	I1107 23:37:06.481701   33391 command_runner.go:130] > # 	"/etc/cdi",
	I1107 23:37:06.481707   33391 command_runner.go:130] > # 	"/var/run/cdi",
	I1107 23:37:06.481711   33391 command_runner.go:130] > # ]
	I1107 23:37:06.481717   33391 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1107 23:37:06.481725   33391 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1107 23:37:06.481731   33391 command_runner.go:130] > # Defaults to false.
	I1107 23:37:06.481736   33391 command_runner.go:130] > # device_ownership_from_security_context = false
	I1107 23:37:06.481746   33391 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1107 23:37:06.481755   33391 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1107 23:37:06.481759   33391 command_runner.go:130] > # hooks_dir = [
	I1107 23:37:06.481763   33391 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1107 23:37:06.481769   33391 command_runner.go:130] > # ]
	I1107 23:37:06.481774   33391 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1107 23:37:06.481783   33391 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1107 23:37:06.481788   33391 command_runner.go:130] > # its default mounts from the following two files:
	I1107 23:37:06.481794   33391 command_runner.go:130] > #
	I1107 23:37:06.481800   33391 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1107 23:37:06.481812   33391 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1107 23:37:06.481824   33391 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1107 23:37:06.481834   33391 command_runner.go:130] > #
	I1107 23:37:06.481840   33391 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1107 23:37:06.481846   33391 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1107 23:37:06.481852   33391 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1107 23:37:06.481860   33391 command_runner.go:130] > #      only add mounts it finds in this file.
	I1107 23:37:06.481863   33391 command_runner.go:130] > #
	I1107 23:37:06.481871   33391 command_runner.go:130] > # default_mounts_file = ""
	I1107 23:37:06.481879   33391 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1107 23:37:06.481885   33391 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1107 23:37:06.481889   33391 command_runner.go:130] > pids_limit = 1024
	I1107 23:37:06.481897   33391 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1107 23:37:06.481911   33391 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1107 23:37:06.481926   33391 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1107 23:37:06.481940   33391 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1107 23:37:06.481950   33391 command_runner.go:130] > # log_size_max = -1
	I1107 23:37:06.481957   33391 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1107 23:37:06.481963   33391 command_runner.go:130] > # log_to_journald = false
	I1107 23:37:06.481969   33391 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1107 23:37:06.481977   33391 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1107 23:37:06.481982   33391 command_runner.go:130] > # Path to directory for container attach sockets.
	I1107 23:37:06.481990   33391 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1107 23:37:06.481995   33391 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1107 23:37:06.482003   33391 command_runner.go:130] > # bind_mount_prefix = ""
	I1107 23:37:06.482013   33391 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1107 23:37:06.482026   33391 command_runner.go:130] > # read_only = false
	I1107 23:37:06.482038   33391 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1107 23:37:06.482051   33391 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1107 23:37:06.482061   33391 command_runner.go:130] > # live configuration reload.
	I1107 23:37:06.482065   33391 command_runner.go:130] > # log_level = "info"
	I1107 23:37:06.482091   33391 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1107 23:37:06.482098   33391 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:37:06.482105   33391 command_runner.go:130] > # log_filter = ""
	I1107 23:37:06.482119   33391 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1107 23:37:06.482133   33391 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1107 23:37:06.482141   33391 command_runner.go:130] > # separated by comma.
	I1107 23:37:06.482151   33391 command_runner.go:130] > # uid_mappings = ""
	I1107 23:37:06.482163   33391 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1107 23:37:06.482176   33391 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1107 23:37:06.482186   33391 command_runner.go:130] > # separated by comma.
	I1107 23:37:06.482193   33391 command_runner.go:130] > # gid_mappings = ""
	I1107 23:37:06.482199   33391 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1107 23:37:06.482208   33391 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1107 23:37:06.482223   33391 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1107 23:37:06.482234   33391 command_runner.go:130] > # minimum_mappable_uid = -1
	I1107 23:37:06.482246   33391 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1107 23:37:06.482259   33391 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1107 23:37:06.482273   33391 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1107 23:37:06.482284   33391 command_runner.go:130] > # minimum_mappable_gid = -1
	I1107 23:37:06.482297   33391 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1107 23:37:06.482306   33391 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1107 23:37:06.482313   33391 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1107 23:37:06.482323   33391 command_runner.go:130] > # ctr_stop_timeout = 30
	I1107 23:37:06.482334   33391 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1107 23:37:06.482347   33391 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1107 23:37:06.482359   33391 command_runner.go:130] > # a kernel-separated runtime (like kata).
	I1107 23:37:06.482370   33391 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1107 23:37:06.482380   33391 command_runner.go:130] > drop_infra_ctr = false
	I1107 23:37:06.482391   33391 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1107 23:37:06.482400   33391 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1107 23:37:06.482416   33391 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1107 23:37:06.482429   33391 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1107 23:37:06.482443   33391 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1107 23:37:06.482455   33391 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1107 23:37:06.482465   33391 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1107 23:37:06.482485   33391 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1107 23:37:06.482494   33391 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1107 23:37:06.482500   33391 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1107 23:37:06.482514   33391 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1107 23:37:06.482528   33391 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1107 23:37:06.482540   33391 command_runner.go:130] > # default_runtime = "runc"
	I1107 23:37:06.482554   33391 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1107 23:37:06.482570   33391 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1107 23:37:06.482590   33391 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1107 23:37:06.482597   33391 command_runner.go:130] > # creation as a file is not desired either.
	I1107 23:37:06.482610   33391 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1107 23:37:06.482623   33391 command_runner.go:130] > # the hostname is being managed dynamically.
	I1107 23:37:06.482631   33391 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1107 23:37:06.482641   33391 command_runner.go:130] > # ]
	I1107 23:37:06.482658   33391 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1107 23:37:06.482671   33391 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1107 23:37:06.482685   33391 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1107 23:37:06.482695   33391 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1107 23:37:06.482703   33391 command_runner.go:130] > #
	I1107 23:37:06.482714   33391 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1107 23:37:06.482726   33391 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1107 23:37:06.482738   33391 command_runner.go:130] > #  runtime_type = "oci"
	I1107 23:37:06.482749   33391 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1107 23:37:06.482760   33391 command_runner.go:130] > #  privileged_without_host_devices = false
	I1107 23:37:06.482770   33391 command_runner.go:130] > #  allowed_annotations = []
	I1107 23:37:06.482780   33391 command_runner.go:130] > # Where:
	I1107 23:37:06.482791   33391 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1107 23:37:06.482800   33391 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1107 23:37:06.482814   33391 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1107 23:37:06.482828   33391 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1107 23:37:06.482838   33391 command_runner.go:130] > #   in $PATH.
	I1107 23:37:06.482851   33391 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1107 23:37:06.482867   33391 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1107 23:37:06.482880   33391 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1107 23:37:06.482892   33391 command_runner.go:130] > #   state.
	I1107 23:37:06.482903   33391 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1107 23:37:06.482916   33391 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1107 23:37:06.482951   33391 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1107 23:37:06.482964   33391 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1107 23:37:06.482977   33391 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1107 23:37:06.482991   33391 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1107 23:37:06.482999   33391 command_runner.go:130] > #   The currently recognized values are:
	I1107 23:37:06.483012   33391 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1107 23:37:06.483028   33391 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1107 23:37:06.483039   33391 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1107 23:37:06.483053   33391 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1107 23:37:06.483068   33391 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1107 23:37:06.483082   33391 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1107 23:37:06.483093   33391 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1107 23:37:06.483103   33391 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1107 23:37:06.483119   33391 command_runner.go:130] > #   should be moved to the container's cgroup
	I1107 23:37:06.483131   33391 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1107 23:37:06.483142   33391 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1107 23:37:06.483153   33391 command_runner.go:130] > runtime_type = "oci"
	I1107 23:37:06.483163   33391 command_runner.go:130] > runtime_root = "/run/runc"
	I1107 23:37:06.483174   33391 command_runner.go:130] > runtime_config_path = ""
	I1107 23:37:06.483184   33391 command_runner.go:130] > monitor_path = ""
	I1107 23:37:06.483193   33391 command_runner.go:130] > monitor_cgroup = ""
	I1107 23:37:06.483200   33391 command_runner.go:130] > monitor_exec_cgroup = ""
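The runc entry above is a concrete instance of the handler format documented earlier in this dump. As a sketch of how a second handler would be registered, assuming a hypothetical crun binary at /usr/bin/crun (the path and root directory are illustrative assumptions, not values from this run):

	[crio.runtime.runtimes.crun]
	# Hypothetical handler; a pod selects it via a RuntimeClass whose handler is "crun".
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	# Let this handler process the device annotation described above.
	allowed_annotations = [
		"io.kubernetes.cri-o.Devices",
	]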
	I1107 23:37:06.483208   33391 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1107 23:37:06.483219   33391 command_runner.go:130] > # running containers
	I1107 23:37:06.483231   33391 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1107 23:37:06.483246   33391 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1107 23:37:06.483302   33391 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1107 23:37:06.483314   33391 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1107 23:37:06.483327   33391 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1107 23:37:06.483339   33391 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1107 23:37:06.483352   33391 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1107 23:37:06.483367   33391 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1107 23:37:06.483378   33391 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1107 23:37:06.483387   33391 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1107 23:37:06.483396   33391 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1107 23:37:06.483408   33391 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1107 23:37:06.483423   33391 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1107 23:37:06.483439   33391 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1107 23:37:06.483454   33391 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1107 23:37:06.483466   33391 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1107 23:37:06.483487   33391 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1107 23:37:06.483503   33391 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1107 23:37:06.483518   33391 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1107 23:37:06.483534   33391 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1107 23:37:06.483544   33391 command_runner.go:130] > # Example:
	I1107 23:37:06.483555   33391 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1107 23:37:06.483567   33391 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1107 23:37:06.483578   33391 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1107 23:37:06.483587   33391 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1107 23:37:06.483596   33391 command_runner.go:130] > # cpuset = "0-1"
	I1107 23:37:06.483606   33391 command_runner.go:130] > # cpushares = 0
	I1107 23:37:06.483616   33391 command_runner.go:130] > # Where:
	I1107 23:37:06.483625   33391 command_runner.go:130] > # The workload name is workload-type.
	I1107 23:37:06.483640   33391 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1107 23:37:06.483653   33391 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1107 23:37:06.483668   33391 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1107 23:37:06.483683   33391 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1107 23:37:06.483691   33391 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
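Read together, an activated (uncommented) version of the example documented above would look like the following; the CPU list and share values are illustrative only:

	[crio.runtime.workloads.workload-type]
	activation_annotation = "io.crio/workload"
	annotation_prefix = "io.crio.workload-type"
	[crio.runtime.workloads.workload-type.resources]
	# cpuset takes a Linux CPU list string; cpushares takes a number.
	cpuset = "0-1"
	cpushares = 1024

A pod opting in would then carry the io.crio/workload annotation, plus an annotation of the form io.crio.workload-type.cpushares/$ctrName to override the default for a single container, per the $annotation_prefix.$resource/$ctrName scheme described above.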
	I1107 23:37:06.483699   33391 command_runner.go:130] > # 
	I1107 23:37:06.483714   33391 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1107 23:37:06.483723   33391 command_runner.go:130] > #
	I1107 23:37:06.483733   33391 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1107 23:37:06.483748   33391 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1107 23:37:06.483761   33391 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1107 23:37:06.483772   33391 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1107 23:37:06.483783   33391 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1107 23:37:06.483793   33391 command_runner.go:130] > [crio.image]
	I1107 23:37:06.483809   33391 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1107 23:37:06.483820   33391 command_runner.go:130] > # default_transport = "docker://"
	I1107 23:37:06.483854   33391 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1107 23:37:06.483863   33391 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1107 23:37:06.483870   33391 command_runner.go:130] > # global_auth_file = ""
	I1107 23:37:06.483882   33391 command_runner.go:130] > # The image used to instantiate infra containers.
	I1107 23:37:06.483895   33391 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:37:06.483906   33391 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1107 23:37:06.483920   33391 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1107 23:37:06.483933   33391 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1107 23:37:06.483945   33391 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:37:06.483953   33391 command_runner.go:130] > # pause_image_auth_file = ""
	I1107 23:37:06.483963   33391 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1107 23:37:06.483977   33391 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1107 23:37:06.483988   33391 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1107 23:37:06.484001   33391 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1107 23:37:06.484012   33391 command_runner.go:130] > # pause_command = "/pause"
	I1107 23:37:06.484025   33391 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1107 23:37:06.484042   33391 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1107 23:37:06.484051   33391 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1107 23:37:06.484065   33391 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1107 23:37:06.484078   33391 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1107 23:37:06.484090   33391 command_runner.go:130] > # signature_policy = ""
	I1107 23:37:06.484103   33391 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1107 23:37:06.484114   33391 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1107 23:37:06.484121   33391 command_runner.go:130] > # changing them here.
	I1107 23:37:06.484128   33391 command_runner.go:130] > # insecure_registries = [
	I1107 23:37:06.484134   33391 command_runner.go:130] > # ]
	I1107 23:37:06.484142   33391 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1107 23:37:06.484150   33391 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1107 23:37:06.484157   33391 command_runner.go:130] > # image_volumes = "mkdir"
	I1107 23:37:06.484166   33391 command_runner.go:130] > # Temporary directory to use for storing big files
	I1107 23:37:06.484174   33391 command_runner.go:130] > # big_files_temporary_dir = ""
	I1107 23:37:06.484184   33391 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1107 23:37:06.484195   33391 command_runner.go:130] > # CNI plugins.
	I1107 23:37:06.484204   33391 command_runner.go:130] > [crio.network]
	I1107 23:37:06.484219   33391 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1107 23:37:06.484229   33391 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1107 23:37:06.484236   33391 command_runner.go:130] > # cni_default_network = ""
	I1107 23:37:06.484246   33391 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1107 23:37:06.484257   33391 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1107 23:37:06.484268   33391 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1107 23:37:06.484279   33391 command_runner.go:130] > # plugin_dirs = [
	I1107 23:37:06.484289   33391 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1107 23:37:06.484298   33391 command_runner.go:130] > # ]
	I1107 23:37:06.484311   33391 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1107 23:37:06.484320   33391 command_runner.go:130] > [crio.metrics]
	I1107 23:37:06.484329   33391 command_runner.go:130] > # Globally enable or disable metrics support.
	I1107 23:37:06.484338   33391 command_runner.go:130] > enable_metrics = true
	I1107 23:37:06.484349   33391 command_runner.go:130] > # Specify enabled metrics collectors.
	I1107 23:37:06.484360   33391 command_runner.go:130] > # By default, all metrics are enabled.
	I1107 23:37:06.484375   33391 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1107 23:37:06.484386   33391 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1107 23:37:06.484399   33391 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1107 23:37:06.484415   33391 command_runner.go:130] > # metrics_collectors = [
	I1107 23:37:06.484425   33391 command_runner.go:130] > # 	"operations",
	I1107 23:37:06.484435   33391 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1107 23:37:06.484445   33391 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1107 23:37:06.484454   33391 command_runner.go:130] > # 	"operations_errors",
	I1107 23:37:06.484458   33391 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1107 23:37:06.484464   33391 command_runner.go:130] > # 	"image_pulls_by_name",
	I1107 23:37:06.484469   33391 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1107 23:37:06.484479   33391 command_runner.go:130] > # 	"image_pulls_failures",
	I1107 23:37:06.484483   33391 command_runner.go:130] > # 	"image_pulls_successes",
	I1107 23:37:06.484490   33391 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1107 23:37:06.484494   33391 command_runner.go:130] > # 	"image_layer_reuse",
	I1107 23:37:06.484501   33391 command_runner.go:130] > # 	"containers_oom_total",
	I1107 23:37:06.484506   33391 command_runner.go:130] > # 	"containers_oom",
	I1107 23:37:06.484512   33391 command_runner.go:130] > # 	"processes_defunct",
	I1107 23:37:06.484516   33391 command_runner.go:130] > # 	"operations_total",
	I1107 23:37:06.484526   33391 command_runner.go:130] > # 	"operations_latency_seconds",
	I1107 23:37:06.484538   33391 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1107 23:37:06.484551   33391 command_runner.go:130] > # 	"operations_errors_total",
	I1107 23:37:06.484562   33391 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1107 23:37:06.484573   33391 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1107 23:37:06.484583   33391 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1107 23:37:06.484591   33391 command_runner.go:130] > # 	"image_pulls_success_total",
	I1107 23:37:06.484599   33391 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1107 23:37:06.484604   33391 command_runner.go:130] > # 	"containers_oom_count_total",
	I1107 23:37:06.484610   33391 command_runner.go:130] > # ]
	I1107 23:37:06.484615   33391 command_runner.go:130] > # The port on which the metrics server will listen.
	I1107 23:37:06.484621   33391 command_runner.go:130] > # metrics_port = 9090
	I1107 23:37:06.484627   33391 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1107 23:37:06.484633   33391 command_runner.go:130] > # metrics_socket = ""
	I1107 23:37:06.484640   33391 command_runner.go:130] > # The certificate for the secure metrics server.
	I1107 23:37:06.484647   33391 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1107 23:37:06.484655   33391 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1107 23:37:06.484662   33391 command_runner.go:130] > # certificate on any modification event.
	I1107 23:37:06.484666   33391 command_runner.go:130] > # metrics_cert = ""
	I1107 23:37:06.484674   33391 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1107 23:37:06.484683   33391 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1107 23:37:06.484688   33391 command_runner.go:130] > # metrics_key = ""
	I1107 23:37:06.484697   33391 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1107 23:37:06.484703   33391 command_runner.go:130] > [crio.tracing]
	I1107 23:37:06.484710   33391 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1107 23:37:06.484717   33391 command_runner.go:130] > # enable_tracing = false
	I1107 23:37:06.484725   33391 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1107 23:37:06.484732   33391 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1107 23:37:06.484737   33391 command_runner.go:130] > # Number of samples to collect per million spans.
	I1107 23:37:06.484744   33391 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1107 23:37:06.484754   33391 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1107 23:37:06.484764   33391 command_runner.go:130] > [crio.stats]
	I1107 23:37:06.484774   33391 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1107 23:37:06.484782   33391 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1107 23:37:06.484786   33391 command_runner.go:130] > # stats_collection_period = 0
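Everything commented out in the dump above is a CRI-O built-in default; the uncommented keys (storage_driver, the conmon settings, cgroup_manager, pids_limit, pause_image, enable_metrics, and so on) are what the provisioner wrote. As a hedged sketch of overriding one table without rewriting the whole file, assuming this CRI-O honors the usual /etc/crio/crio.conf.d drop-in directory:

	# /etc/crio/crio.conf.d/10-metrics.conf -- hypothetical drop-in, not from this run
	[crio.metrics]
	enable_metrics = true
	# 9090 is the default per the dump above; 9091 is an arbitrary example value.
	metrics_port = 9091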
	I1107 23:37:06.484831   33391 command_runner.go:130] ! time="2023-11-07 23:37:06.427527080Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1107 23:37:06.484850   33391 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1107 23:37:06.484927   33391 cni.go:84] Creating CNI manager for ""
	I1107 23:37:06.484943   33391 cni.go:136] 3 nodes found, recommending kindnet
	I1107 23:37:06.484960   33391 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 23:37:06.484982   33391 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.246 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-553062 NodeName:multinode-553062 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1107 23:37:06.485102   33391 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-553062"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 23:37:06.485176   33391 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-553062 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-553062 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1107 23:37:06.485225   33391 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1107 23:37:06.494203   33391 command_runner.go:130] > kubeadm
	I1107 23:37:06.494224   33391 command_runner.go:130] > kubectl
	I1107 23:37:06.494231   33391 command_runner.go:130] > kubelet
	I1107 23:37:06.494249   33391 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 23:37:06.494297   33391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 23:37:06.502515   33391 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I1107 23:37:06.518557   33391 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 23:37:06.534064   33391 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1107 23:37:06.549904   33391 ssh_runner.go:195] Run: grep 192.168.39.246	control-plane.minikube.internal$ /etc/hosts
	I1107 23:37:06.553465   33391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.246	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
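
The bash pipeline above is an idempotent hosts-file update: strip any existing control-plane.minikube.internal line, append the canonical entry, and copy the result back over /etc/hosts. A rough Go equivalent of the same pattern (illustrative only; minikube runs the shell version over SSH, not this code):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites path so it contains exactly one "<ip>\t<host>"
// line, mirroring the grep -v / echo / cp pipeline in the log above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line) // drop any stale entry for host
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(ensureHostsEntry("/etc/hosts", "192.168.39.246", "control-plane.minikube.internal"))
}
```
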
	I1107 23:37:06.565458   33391 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062 for IP: 192.168.39.246
	I1107 23:37:06.565483   33391 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:37:06.565627   33391 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1107 23:37:06.565666   33391 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1107 23:37:06.565735   33391 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.key
	I1107 23:37:06.565788   33391 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/apiserver.key.4f23f264
	I1107 23:37:06.565826   33391 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/proxy-client.key
	I1107 23:37:06.565838   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1107 23:37:06.565852   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1107 23:37:06.565866   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1107 23:37:06.565879   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1107 23:37:06.565891   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1107 23:37:06.565903   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1107 23:37:06.565914   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1107 23:37:06.565926   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1107 23:37:06.565977   33391 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem (1338 bytes)
	W1107 23:37:06.566004   33391 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848_empty.pem, impossibly tiny 0 bytes
	I1107 23:37:06.566014   33391 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 23:37:06.566039   33391 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1107 23:37:06.566063   33391 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1107 23:37:06.566084   33391 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1107 23:37:06.566137   33391 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem (1708 bytes)
	I1107 23:37:06.566163   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem -> /usr/share/ca-certificates/16848.pem
	I1107 23:37:06.566180   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> /usr/share/ca-certificates/168482.pem
	I1107 23:37:06.566195   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:37:06.566854   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 23:37:06.589860   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1107 23:37:06.611954   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 23:37:06.633584   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1107 23:37:06.655317   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 23:37:06.677572   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1107 23:37:06.700294   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 23:37:06.724060   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1107 23:37:06.746227   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem --> /usr/share/ca-certificates/16848.pem (1338 bytes)
	I1107 23:37:06.769132   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /usr/share/ca-certificates/168482.pem (1708 bytes)
	I1107 23:37:06.791210   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 23:37:06.813102   33391 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 23:37:06.835096   33391 ssh_runner.go:195] Run: openssl version
	I1107 23:37:06.840331   33391 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1107 23:37:06.840593   33391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16848.pem && ln -fs /usr/share/ca-certificates/16848.pem /etc/ssl/certs/16848.pem"
	I1107 23:37:06.850349   33391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16848.pem
	I1107 23:37:06.855008   33391 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1107 23:37:06.855025   33391 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1107 23:37:06.855062   33391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16848.pem
	I1107 23:37:06.860590   33391 command_runner.go:130] > 51391683
	I1107 23:37:06.860666   33391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16848.pem /etc/ssl/certs/51391683.0"
	I1107 23:37:06.870593   33391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168482.pem && ln -fs /usr/share/ca-certificates/168482.pem /etc/ssl/certs/168482.pem"
	I1107 23:37:06.880599   33391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168482.pem
	I1107 23:37:06.885518   33391 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1107 23:37:06.885542   33391 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1107 23:37:06.885577   33391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168482.pem
	I1107 23:37:06.891359   33391 command_runner.go:130] > 3ec20f2e
	I1107 23:37:06.891434   33391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168482.pem /etc/ssl/certs/3ec20f2e.0"
	I1107 23:37:06.901639   33391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 23:37:06.911416   33391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:37:06.916050   33391 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:37:06.916077   33391 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:37:06.916115   33391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:37:06.921596   33391 command_runner.go:130] > b5213941
	I1107 23:37:06.921722   33391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
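
The three openssl/ln pairs above implement OpenSSL's hashed-directory lookup: each CA certificate is symlinked at /etc/ssl/certs/<subject-hash>.0 so TLS clients can locate it by hash. A minimal Go sketch of that hash-and-link step (a hypothetical standalone helper, not minikube's actual certs.go code, which runs these commands over SSH):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA links certPath into dir under OpenSSL's <subject-hash>.0
// naming convention, matching the openssl x509 -hash / ln -fs pair above.
func installCA(certPath, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := fmt.Sprintf("%s/%s.0", dir, strings.TrimSpace(string(out)))
	_ = os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
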
	I1107 23:37:06.931846   33391 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1107 23:37:06.936062   33391 command_runner.go:130] > ca.crt
	I1107 23:37:06.936077   33391 command_runner.go:130] > ca.key
	I1107 23:37:06.936085   33391 command_runner.go:130] > healthcheck-client.crt
	I1107 23:37:06.936092   33391 command_runner.go:130] > healthcheck-client.key
	I1107 23:37:06.936108   33391 command_runner.go:130] > peer.crt
	I1107 23:37:06.936114   33391 command_runner.go:130] > peer.key
	I1107 23:37:06.936121   33391 command_runner.go:130] > server.crt
	I1107 23:37:06.936125   33391 command_runner.go:130] > server.key
	I1107 23:37:06.936245   33391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1107 23:37:06.942260   33391 command_runner.go:130] > Certificate will not expire
	I1107 23:37:06.942372   33391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1107 23:37:06.948206   33391 command_runner.go:130] > Certificate will not expire
	I1107 23:37:06.948266   33391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1107 23:37:06.954112   33391 command_runner.go:130] > Certificate will not expire
	I1107 23:37:06.954174   33391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1107 23:37:06.959535   33391 command_runner.go:130] > Certificate will not expire
	I1107 23:37:06.959813   33391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1107 23:37:06.965604   33391 command_runner.go:130] > Certificate will not expire
	I1107 23:37:06.965670   33391 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1107 23:37:06.971301   33391 command_runner.go:130] > Certificate will not expire
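
Each `openssl x509 -checkend 86400` call above asks whether the certificate expires within the next 86400 seconds (24 hours); "Certificate will not expire" means the restart can reuse it. The same check can be done natively in Go; a sketch assuming a PEM-encoded certificate file (illustrative only, since the log shows minikube shelling out to openssl instead):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file at
// path expires within d — the equivalent of openssl x509 -checkend.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 86400*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
```
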
	I1107 23:37:06.971487   33391 kubeadm.go:404] StartCluster: {Name:multinode-553062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-553062 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.137 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.201 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:fals
e istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:37:06.971592   33391 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1107 23:37:06.971644   33391 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1107 23:37:07.013099   33391 cri.go:89] found id: ""
	I1107 23:37:07.013174   33391 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 23:37:07.023557   33391 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1107 23:37:07.023573   33391 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1107 23:37:07.023579   33391 command_runner.go:130] > /var/lib/minikube/etcd:
	I1107 23:37:07.023583   33391 command_runner.go:130] > member
	I1107 23:37:07.023600   33391 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1107 23:37:07.023609   33391 kubeadm.go:636] restartCluster start
	I1107 23:37:07.023657   33391 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1107 23:37:07.033047   33391 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:37:07.033543   33391 kubeconfig.go:92] found "multinode-553062" server: "https://192.168.39.246:8443"
	I1107 23:37:07.033928   33391 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1107 23:37:07.034157   33391 kapi.go:59] client config for multinode-553062: &rest.Config{Host:"https://192.168.39.246:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.key", CAFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:37:07.034705   33391 cert_rotation.go:137] Starting client certificate rotation controller
	I1107 23:37:07.034915   33391 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1107 23:37:07.043176   33391 api_server.go:166] Checking apiserver status ...
	I1107 23:37:07.043215   33391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:37:07.053755   33391 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:37:07.053774   33391 api_server.go:166] Checking apiserver status ...
	I1107 23:37:07.053818   33391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:37:07.064368   33391 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:37:07.565093   33391 api_server.go:166] Checking apiserver status ...
	I1107 23:37:07.565171   33391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:37:07.576244   33391 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:37:08.065445   33391 api_server.go:166] Checking apiserver status ...
	I1107 23:37:08.183221   33391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:37:08.194708   33391 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:37:08.565217   33391 api_server.go:166] Checking apiserver status ...
	I1107 23:37:08.565326   33391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:37:08.577279   33391 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:37:09.064775   33391 api_server.go:166] Checking apiserver status ...
	I1107 23:37:09.064877   33391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:37:09.076433   33391 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:37:09.564855   33391 api_server.go:166] Checking apiserver status ...
	I1107 23:37:09.564930   33391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:37:09.576033   33391 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:37:10.064571   33391 api_server.go:166] Checking apiserver status ...
	I1107 23:37:10.064640   33391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:37:10.075998   33391 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:37:10.564608   33391 api_server.go:166] Checking apiserver status ...
	I1107 23:37:10.564687   33391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:37:10.575909   33391 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:37:11.065016   33391 api_server.go:166] Checking apiserver status ...
	I1107 23:37:11.065095   33391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:37:11.076072   33391 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:37:11.564615   33391 api_server.go:166] Checking apiserver status ...
	I1107 23:37:11.564709   33391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:37:11.576152   33391 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:37:12.064855   33391 api_server.go:166] Checking apiserver status ...
	I1107 23:37:12.064936   33391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:37:12.076329   33391 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:37:12.564528   33391 api_server.go:166] Checking apiserver status ...
	I1107 23:37:12.564633   33391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:37:12.575994   33391 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:37:13.065440   33391 api_server.go:166] Checking apiserver status ...
	I1107 23:37:13.065543   33391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:37:13.076508   33391 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:37:13.564525   33391 api_server.go:166] Checking apiserver status ...
	I1107 23:37:13.564593   33391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:37:13.575968   33391 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:37:14.064527   33391 api_server.go:166] Checking apiserver status ...
	I1107 23:37:14.064613   33391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:37:14.075896   33391 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:37:14.564467   33391 api_server.go:166] Checking apiserver status ...
	I1107 23:37:14.564570   33391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:37:14.576147   33391 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:37:15.064660   33391 api_server.go:166] Checking apiserver status ...
	I1107 23:37:15.064758   33391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:37:15.076070   33391 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:37:15.564606   33391 api_server.go:166] Checking apiserver status ...
	I1107 23:37:15.564691   33391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:37:15.575518   33391 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:37:16.065469   33391 api_server.go:166] Checking apiserver status ...
	I1107 23:37:16.065541   33391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:37:16.076711   33391 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:37:16.565283   33391 api_server.go:166] Checking apiserver status ...
	I1107 23:37:16.565349   33391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:37:16.576690   33391 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:37:17.043382   33391 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
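
The ten seconds of identical pgrep probes above are a fixed-interval poll bounded by a context deadline; when the deadline fires with no kube-apiserver process found, minikube concludes the control plane is down and falls through to reconfiguration. A minimal sketch of that shape (the 500 ms interval and 10 s deadline are assumptions for illustration; the real timings come from minikube's own retry helpers):

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process appears or
// ctx expires, mirroring the "Checking apiserver status" loop above.
func waitForAPIServer(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		// pgrep exits 0 only when a matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // yields "context deadline exceeded"
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	fmt.Println(waitForAPIServer(ctx))
}
```
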
	I1107 23:37:17.043427   33391 kubeadm.go:1128] stopping kube-system containers ...
	I1107 23:37:17.043441   33391 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1107 23:37:17.043489   33391 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1107 23:37:17.081318   33391 cri.go:89] found id: ""
	I1107 23:37:17.081379   33391 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1107 23:37:17.096889   33391 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 23:37:17.105776   33391 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1107 23:37:17.105796   33391 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1107 23:37:17.105805   33391 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1107 23:37:17.105816   33391 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 23:37:17.105848   33391 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 23:37:17.105903   33391 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 23:37:17.114352   33391 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1107 23:37:17.114371   33391 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 23:37:17.211349   33391 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 23:37:17.211947   33391 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1107 23:37:17.212524   33391 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1107 23:37:17.213867   33391 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1107 23:37:17.214528   33391 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1107 23:37:17.215088   33391 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1107 23:37:17.215980   33391 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1107 23:37:17.216634   33391 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1107 23:37:17.217240   33391 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1107 23:37:17.217842   33391 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1107 23:37:17.218404   33391 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1107 23:37:17.219113   33391 command_runner.go:130] > [certs] Using the existing "sa" key
	I1107 23:37:17.220514   33391 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 23:37:18.064549   33391 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 23:37:18.064570   33391 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 23:37:18.064577   33391 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 23:37:18.064582   33391 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 23:37:18.064592   33391 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 23:37:18.064841   33391 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1107 23:37:18.249172   33391 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 23:37:18.249205   33391 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 23:37:18.249213   33391 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1107 23:37:18.249241   33391 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 23:37:18.318365   33391 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 23:37:18.318385   33391 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 23:37:18.320935   33391 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 23:37:18.321980   33391 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 23:37:18.324009   33391 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1107 23:37:18.398454   33391 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 23:37:18.403324   33391 api_server.go:52] waiting for apiserver process to appear ...
	I1107 23:37:18.403400   33391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:37:18.421938   33391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:37:18.938260   33391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:37:19.438102   33391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:37:19.938139   33391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:37:20.438095   33391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:37:20.937641   33391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:37:20.966119   33391 command_runner.go:130] > 1127
	I1107 23:37:20.966302   33391 api_server.go:72] duration metric: took 2.562989809s to wait for apiserver process to appear ...
	I1107 23:37:20.966321   33391 api_server.go:88] waiting for apiserver healthz status ...
	I1107 23:37:20.966343   33391 api_server.go:253] Checking apiserver healthz at https://192.168.39.246:8443/healthz ...
	I1107 23:37:24.605283   33391 api_server.go:279] https://192.168.39.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1107 23:37:24.605311   33391 api_server.go:103] status: https://192.168.39.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1107 23:37:24.605330   33391 api_server.go:253] Checking apiserver healthz at https://192.168.39.246:8443/healthz ...
	I1107 23:37:24.646210   33391 api_server.go:279] https://192.168.39.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1107 23:37:24.646237   33391 api_server.go:103] status: https://192.168.39.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1107 23:37:25.146956   33391 api_server.go:253] Checking apiserver healthz at https://192.168.39.246:8443/healthz ...
	I1107 23:37:25.152699   33391 api_server.go:279] https://192.168.39.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1107 23:37:25.152776   33391 api_server.go:103] status: https://192.168.39.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1107 23:37:25.647374   33391 api_server.go:253] Checking apiserver healthz at https://192.168.39.246:8443/healthz ...
	I1107 23:37:25.654416   33391 api_server.go:279] https://192.168.39.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1107 23:37:25.654453   33391 api_server.go:103] status: https://192.168.39.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1107 23:37:26.146319   33391 api_server.go:253] Checking apiserver healthz at https://192.168.39.246:8443/healthz ...
	I1107 23:37:26.152075   33391 api_server.go:279] https://192.168.39.246:8443/healthz returned 200:
	ok
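
Above, /healthz progresses from 403 (anonymous access while RBAC bootstrap roles are still missing) through 500 (post-start hooks not yet finished) to 200 "ok". A sketch of the same poll-until-healthy loop over HTTPS; note that certificate verification is skipped here purely for brevity, whereas minikube dials with the cluster CA:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver's /healthz endpoint until it returns
// HTTP 200, printing each non-OK body the way the log above does.
func waitHealthz(url string, attempts int) error {
	client := &http.Client{
		Transport: &http.Transport{
			// Illustration only: minikube verifies against the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 5 * time.Second,
	}
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s never became healthy", url)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.39.246:8443/healthz", 60))
}
```
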
	I1107 23:37:26.152166   33391 round_trippers.go:463] GET https://192.168.39.246:8443/version
	I1107 23:37:26.152174   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:26.152182   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:26.152188   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:26.160489   33391 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1107 23:37:26.160508   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:26.160515   33391 round_trippers.go:580]     Content-Length: 264
	I1107 23:37:26.160520   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:26 GMT
	I1107 23:37:26.160528   33391 round_trippers.go:580]     Audit-Id: dbb57776-5dfa-42bb-9316-8226d648e156
	I1107 23:37:26.160537   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:26.160552   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:26.160560   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:26.160572   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:26.160593   33391 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1107 23:37:26.160688   33391 api_server.go:141] control plane version: v1.28.3
	I1107 23:37:26.160705   33391 api_server.go:131] duration metric: took 5.194377285s to wait for apiserver health ...
	I1107 23:37:26.160714   33391 cni.go:84] Creating CNI manager for ""
	I1107 23:37:26.160720   33391 cni.go:136] 3 nodes found, recommending kindnet
	I1107 23:37:26.162886   33391 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1107 23:37:26.164393   33391 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1107 23:37:26.175541   33391 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1107 23:37:26.175581   33391 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1107 23:37:26.175594   33391 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1107 23:37:26.175605   33391 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1107 23:37:26.175614   33391 command_runner.go:130] > Access: 2023-11-07 23:36:53.922905698 +0000
	I1107 23:37:26.175621   33391 command_runner.go:130] > Modify: 2023-11-07 07:42:50.000000000 +0000
	I1107 23:37:26.175636   33391 command_runner.go:130] > Change: 2023-11-07 23:36:52.115905698 +0000
	I1107 23:37:26.175642   33391 command_runner.go:130] >  Birth: -
	I1107 23:37:26.175708   33391 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1107 23:37:26.175722   33391 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1107 23:37:26.204370   33391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1107 23:37:27.412571   33391 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1107 23:37:27.420077   33391 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1107 23:37:27.423663   33391 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1107 23:37:27.437445   33391 command_runner.go:130] > daemonset.apps/kindnet configured
	I1107 23:37:27.440273   33391 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.235871267s)
	I1107 23:37:27.440319   33391 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 23:37:27.440447   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1107 23:37:27.440459   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:27.440467   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:27.440473   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:27.444029   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:37:27.444050   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:27.444060   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:27.444081   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:27.444095   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:27.444104   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:27 GMT
	I1107 23:37:27.444117   33391 round_trippers.go:580]     Audit-Id: 2338059a-397c-48fd-8ae6-f33bf9f796d6
	I1107 23:37:27.444131   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:27.445581   33391 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"770"},"items":[{"metadata":{"name":"coredns-5dd5756b68-6ggfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"785c6064-d793-4959-8e34-28b4fc2549fc","resourceVersion":"759","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b131694e-1b3b-40e6-bc1b-3f62a604903c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b131694e-1b3b-40e6-bc1b-3f62a604903c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83175 chars]
	I1107 23:37:27.449530   33391 system_pods.go:59] 12 kube-system pods found
	I1107 23:37:27.449561   33391 system_pods.go:61] "coredns-5dd5756b68-6ggfr" [785c6064-d793-4959-8e34-28b4fc2549fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1107 23:37:27.449569   33391 system_pods.go:61] "etcd-multinode-553062" [3819c5f8-686f-4ce6-95fb-e9d5bb68cbc1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1107 23:37:27.449576   33391 system_pods.go:61] "kindnet-4v85d" [4e2275f3-7b2e-4a79-9d52-645f8f85f574] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1107 23:37:27.449585   33391 system_pods.go:61] "kindnet-9stvx" [a9981d59-dbff-456f-9024-2754c2a9d0c6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1107 23:37:27.449589   33391 system_pods.go:61] "kindnet-g8624" [61ab7168-2e63-4b3f-ab3d-b407952d7b06] Running
	I1107 23:37:27.449595   33391 system_pods.go:61] "kube-apiserver-multinode-553062" [30896fa0-3d8f-4861-bdf5-ad94796ad097] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1107 23:37:27.449613   33391 system_pods.go:61] "kube-controller-manager-multinode-553062" [5a895945-b908-44ba-a1c8-93245f6a93f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1107 23:37:27.449627   33391 system_pods.go:61] "kube-proxy-944rz" [db20b1cf-b422-4649-a6e1-4549c4c56f33] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1107 23:37:27.449631   33391 system_pods.go:61] "kube-proxy-rktlk" [92ea69ee-cd72-4594-a338-9837cc44e5a1] Running
	I1107 23:37:27.449635   33391 system_pods.go:61] "kube-proxy-xwp5j" [0347e6b5-3070-4b6a-ae2b-d1ac56a385cd] Running
	I1107 23:37:27.449639   33391 system_pods.go:61] "kube-scheduler-multinode-553062" [334a75af-c6cb-45ac-a020-8afc3f4a4e7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1107 23:37:27.449646   33391 system_pods.go:61] "storage-provisioner" [85179396-d02a-404a-a93e-e10db8c673b6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1107 23:37:27.449654   33391 system_pods.go:74] duration metric: took 9.325784ms to wait for pod list to return data ...
	I1107 23:37:27.449667   33391 node_conditions.go:102] verifying NodePressure condition ...
	I1107 23:37:27.449717   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes
	I1107 23:37:27.449724   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:27.449731   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:27.449737   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:27.452728   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:27.452742   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:27.452749   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:27.452754   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:27.452760   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:27 GMT
	I1107 23:37:27.452764   33391 round_trippers.go:580]     Audit-Id: e4cb37b4-75dc-4656-9a30-bacd0840763e
	I1107 23:37:27.452769   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:27.452774   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:27.452998   33391 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"770"},"items":[{"metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"722","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15252 chars]
	I1107 23:37:27.453966   33391 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1107 23:37:27.453990   33391 node_conditions.go:123] node cpu capacity is 2
	I1107 23:37:27.453999   33391 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1107 23:37:27.454006   33391 node_conditions.go:123] node cpu capacity is 2
	I1107 23:37:27.454010   33391 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1107 23:37:27.454014   33391 node_conditions.go:123] node cpu capacity is 2
	I1107 23:37:27.454017   33391 node_conditions.go:105] duration metric: took 4.346136ms to run NodePressure ...
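
For context, the NodePressure verification above amounts to listing the nodes and reading each node's capacity and pressure conditions. A minimal client-go sketch of the same idea, assuming a reachable cluster; the kubeconfig path is hypothetical and this is not minikube's actual implementation:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a clientset from a kubeconfig (path is hypothetical).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Mirrors the "ephemeral capacity" / "cpu capacity" lines in the log.
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name,
				n.Status.Capacity.StorageEphemeral().String(),
				n.Status.Capacity.Cpu().String())
			for _, c := range n.Status.Conditions {
				// A True pressure condition would fail the NodePressure check.
				switch c.Type {
				case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
					if c.Status == corev1.ConditionTrue {
						fmt.Printf("%s: %s is True\n", n.Name, c.Type)
					}
				}
			}
		}
	}
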
	I1107 23:37:27.454036   33391 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 23:37:27.621266   33391 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1107 23:37:27.677636   33391 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
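
The restart path shells out to kubeadm to re-apply the essential CoreDNS and kube-proxy addons. For readers reproducing this step outside the test harness, a rough local equivalent of that one ssh_runner command, as a Go sketch; the PATH handling only approximates the remote shell's $PATH expansion:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Same command the ssh_runner line above executes on the guest VM.
		cmd := exec.Command("sudo", "env",
			"PATH=/var/lib/minikube/binaries/v1.28.3:"+os.Getenv("PATH"),
			"kubeadm", "init", "phase", "addon", "all",
			"--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out)) // expect the "[addons] Applied essential addon: ..." lines
		if err != nil {
			os.Exit(1)
		}
	}
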
	I1107 23:37:27.679101   33391 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1107 23:37:27.679217   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I1107 23:37:27.679228   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:27.679239   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:27.679249   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:27.682139   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:27.682158   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:27.682169   33391 round_trippers.go:580]     Audit-Id: 9df29304-015e-458d-b722-71438dcc8105
	I1107 23:37:27.682179   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:27.682187   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:27.682192   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:27.682203   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:27.682209   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:27 GMT
	I1107 23:37:27.682801   33391 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"774"},"items":[{"metadata":{"name":"etcd-multinode-553062","namespace":"kube-system","uid":"3819c5f8-686f-4ce6-95fb-e9d5bb68cbc1","resourceVersion":"761","creationTimestamp":"2023-11-07T23:26:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.246:2379","kubernetes.io/config.hash":"f82562fbdca14daeb385ae6968954f46","kubernetes.io/config.mirror":"f82562fbdca14daeb385ae6968954f46","kubernetes.io/config.seen":"2023-11-07T23:26:48.362630200Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 28886 chars]
	I1107 23:37:27.684112   33391 kubeadm.go:787] kubelet initialised
	I1107 23:37:27.684136   33391 kubeadm.go:788] duration metric: took 5.012128ms waiting for restarted kubelet to initialise ...
	I1107 23:37:27.684145   33391 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:37:27.684221   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1107 23:37:27.684235   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:27.684246   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:27.684256   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:27.687807   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:37:27.687829   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:27.687838   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:27.687846   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:27.687854   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:27.687872   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:27.687878   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:27 GMT
	I1107 23:37:27.687883   33391 round_trippers.go:580]     Audit-Id: 7f0a4bd3-a0be-4daa-8ba0-727b4d145741
	I1107 23:37:27.689075   33391 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"774"},"items":[{"metadata":{"name":"coredns-5dd5756b68-6ggfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"785c6064-d793-4959-8e34-28b4fc2549fc","resourceVersion":"759","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b131694e-1b3b-40e6-bc1b-3f62a604903c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b131694e-1b3b-40e6-bc1b-3f62a604903c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82674 chars]
	I1107 23:37:27.691615   33391 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6ggfr" in "kube-system" namespace to be "Ready" ...
	I1107 23:37:27.691694   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6ggfr
	I1107 23:37:27.691704   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:27.691712   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:27.691718   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:27.695406   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:37:27.695424   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:27.695431   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:27.695439   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:27 GMT
	I1107 23:37:27.695448   33391 round_trippers.go:580]     Audit-Id: f7aacb3b-a930-4ad2-b643-8a50c31981d7
	I1107 23:37:27.695456   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:27.695465   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:27.695473   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:27.695593   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6ggfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"785c6064-d793-4959-8e34-28b4fc2549fc","resourceVersion":"759","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b131694e-1b3b-40e6-bc1b-3f62a604903c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b131694e-1b3b-40e6-bc1b-3f62a604903c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1107 23:37:27.695965   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:27.695976   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:27.695982   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:27.695988   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:27.700032   33391 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1107 23:37:27.700049   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:27.700056   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:27.700062   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:27.700073   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:27.700078   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:27.700083   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:27 GMT
	I1107 23:37:27.700088   33391 round_trippers.go:580]     Audit-Id: 2fc0eb3f-5674-4ad0-9fb5-f3768eb27b35
	I1107 23:37:27.700437   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"722","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1107 23:37:27.700707   33391 pod_ready.go:97] node "multinode-553062" hosting pod "coredns-5dd5756b68-6ggfr" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-553062" has status "Ready":"False"
	I1107 23:37:27.700726   33391 pod_ready.go:81] duration metric: took 9.090463ms waiting for pod "coredns-5dd5756b68-6ggfr" in "kube-system" namespace to be "Ready" ...
	E1107 23:37:27.700734   33391 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-553062" hosting pod "coredns-5dd5756b68-6ggfr" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-553062" has status "Ready":"False"
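
The pattern that repeats for each control-plane pod below (fetch the pod, fetch its hosting node, skip the pod while the node itself is not Ready) reduces to a small predicate. A client-go sketch of that check; the function name is invented here and error handling in main is elided:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeHostingPodReady reports whether the node a pod is scheduled to is
	// Ready; the "skipping!" branch above fires when this returns false.
	func nodeHostingPodReady(ctx context.Context, c kubernetes.Interface, pod *corev1.Pod) (bool, error) {
		node, err := c.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, _ := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
		c, _ := kubernetes.NewForConfig(cfg)
		pod, _ := c.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-6ggfr", metav1.GetOptions{})
		ok, _ := nodeHostingPodReady(context.TODO(), c, pod)
		fmt.Println("hosting node ready:", ok)
	}
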
	I1107 23:37:27.700742   33391 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:37:27.700784   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-553062
	I1107 23:37:27.700791   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:27.700798   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:27.700804   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:27.702950   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:27.702963   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:27.702968   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:27.702974   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:27.702979   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:27 GMT
	I1107 23:37:27.702987   33391 round_trippers.go:580]     Audit-Id: f545dfd8-77cd-4ef7-bb25-45fd5951fe69
	I1107 23:37:27.702992   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:27.702997   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:27.703230   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-553062","namespace":"kube-system","uid":"3819c5f8-686f-4ce6-95fb-e9d5bb68cbc1","resourceVersion":"761","creationTimestamp":"2023-11-07T23:26:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.246:2379","kubernetes.io/config.hash":"f82562fbdca14daeb385ae6968954f46","kubernetes.io/config.mirror":"f82562fbdca14daeb385ae6968954f46","kubernetes.io/config.seen":"2023-11-07T23:26:48.362630200Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1107 23:37:27.703539   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:27.703550   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:27.703556   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:27.703562   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:27.706400   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:27.706418   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:27.706426   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:27 GMT
	I1107 23:37:27.706432   33391 round_trippers.go:580]     Audit-Id: 64ad9fa4-bdd3-44a2-a2a9-8cb4279edd7f
	I1107 23:37:27.706437   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:27.706442   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:27.706447   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:27.706452   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:27.706624   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"722","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1107 23:37:27.706906   33391 pod_ready.go:97] node "multinode-553062" hosting pod "etcd-multinode-553062" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-553062" has status "Ready":"False"
	I1107 23:37:27.706921   33391 pod_ready.go:81] duration metric: took 6.17268ms waiting for pod "etcd-multinode-553062" in "kube-system" namespace to be "Ready" ...
	E1107 23:37:27.706929   33391 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-553062" hosting pod "etcd-multinode-553062" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-553062" has status "Ready":"False"
	I1107 23:37:27.706945   33391 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:37:27.706994   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-553062
	I1107 23:37:27.707003   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:27.707010   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:27.707020   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:27.708974   33391 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 23:37:27.708986   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:27.708991   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:27.708997   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:27 GMT
	I1107 23:37:27.709001   33391 round_trippers.go:580]     Audit-Id: 3dcf22ea-005e-46bb-8ecb-1ad0d58a81c9
	I1107 23:37:27.709006   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:27.709011   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:27.709016   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:27.709314   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-553062","namespace":"kube-system","uid":"30896fa0-3d8f-4861-bdf5-ad94796ad097","resourceVersion":"762","creationTimestamp":"2023-11-07T23:26:57Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.246:8443","kubernetes.io/config.hash":"cf3161d745dce4ca9e35cf659a0b5ec9","kubernetes.io/config.mirror":"cf3161d745dce4ca9e35cf659a0b5ec9","kubernetes.io/config.seen":"2023-11-07T23:26:57.103263110Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I1107 23:37:27.709646   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:27.709656   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:27.709662   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:27.709668   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:27.711952   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:27.711965   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:27.711971   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:27.711976   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:27.711985   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:27.711990   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:27.711995   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:27 GMT
	I1107 23:37:27.712000   33391 round_trippers.go:580]     Audit-Id: f307ffc2-3f33-4390-a7b2-6cfd1acc3b41
	I1107 23:37:27.712539   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"722","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1107 23:37:27.712802   33391 pod_ready.go:97] node "multinode-553062" hosting pod "kube-apiserver-multinode-553062" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-553062" has status "Ready":"False"
	I1107 23:37:27.712828   33391 pod_ready.go:81] duration metric: took 5.876829ms waiting for pod "kube-apiserver-multinode-553062" in "kube-system" namespace to be "Ready" ...
	E1107 23:37:27.712839   33391 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-553062" hosting pod "kube-apiserver-multinode-553062" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-553062" has status "Ready":"False"
	I1107 23:37:27.712851   33391 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:37:27.712890   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-553062
	I1107 23:37:27.712898   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:27.712905   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:27.712910   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:27.720639   33391 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1107 23:37:27.720654   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:27.720661   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:27.720666   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:27.720671   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:27 GMT
	I1107 23:37:27.720679   33391 round_trippers.go:580]     Audit-Id: 0b3d7ff6-ed66-4f30-b962-d7394c2d9076
	I1107 23:37:27.720685   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:27.720690   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:27.720842   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-553062","namespace":"kube-system","uid":"5a895945-b908-44ba-a1c8-93245f6a93f5","resourceVersion":"763","creationTimestamp":"2023-11-07T23:26:57Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6355e861fae0971467df802e2b4d8be6","kubernetes.io/config.mirror":"6355e861fae0971467df802e2b4d8be6","kubernetes.io/config.seen":"2023-11-07T23:26:57.103264314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I1107 23:37:27.840472   33391 request.go:629] Waited for 119.188971ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:27.840547   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:27.840558   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:27.840581   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:27.840593   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:27.844003   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:37:27.844022   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:27.844029   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:27.844034   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:27.844039   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:27.844044   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:27.844049   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:27 GMT
	I1107 23:37:27.844054   33391 round_trippers.go:580]     Audit-Id: 16febde4-b633-4eea-8a18-07e95ede532b
	I1107 23:37:27.844257   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"722","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1107 23:37:27.844615   33391 pod_ready.go:97] node "multinode-553062" hosting pod "kube-controller-manager-multinode-553062" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-553062" has status "Ready":"False"
	I1107 23:37:27.844645   33391 pod_ready.go:81] duration metric: took 131.785414ms waiting for pod "kube-controller-manager-multinode-553062" in "kube-system" namespace to be "Ready" ...
	E1107 23:37:27.844657   33391 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-553062" hosting pod "kube-controller-manager-multinode-553062" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-553062" has status "Ready":"False"
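
The "Waited ... due to client-side throttling, not priority and fairness" messages interleaved here come from client-go's own token-bucket rate limiter, which defaults to 5 requests/second with a burst of 10, not from the API server. A sketch of where that knob lives; the function name and values are illustrative:

	package sketch

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// newFasterClient raises the client-side rate budget so bursts of GETs like
	// the ones above are not queued for ~100-400ms each.
	func newFasterClient(kubeconfig string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50    // client-go default is 5 requests/second
		cfg.Burst = 100 // client-go default burst is 10
		return kubernetes.NewForConfig(cfg)
	}
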
	I1107 23:37:27.844665   33391 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-944rz" in "kube-system" namespace to be "Ready" ...
	I1107 23:37:28.041036   33391 request.go:629] Waited for 196.297798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-944rz
	I1107 23:37:28.041101   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-944rz
	I1107 23:37:28.041106   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:28.041114   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:28.041127   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:28.043854   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:28.043869   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:28.043875   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:28.043880   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:28.043885   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:28.043890   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:28 GMT
	I1107 23:37:28.043898   33391 round_trippers.go:580]     Audit-Id: 702b73b3-f4f4-48b3-8a1f-1bcc5486cfd5
	I1107 23:37:28.043906   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:28.044132   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-944rz","generateName":"kube-proxy-","namespace":"kube-system","uid":"db20b1cf-b422-4649-a6e1-4549c4c56f33","resourceVersion":"772","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"072addbc-9bf2-4d6f-93c3-120a159f2721","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"072addbc-9bf2-4d6f-93c3-120a159f2721\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1107 23:37:28.240798   33391 request.go:629] Waited for 196.278478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:28.240882   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:28.240889   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:28.240903   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:28.240918   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:28.243361   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:28.243375   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:28.243381   33391 round_trippers.go:580]     Audit-Id: 0747a713-a4f0-49b4-a13b-dd310f6449a2
	I1107 23:37:28.243387   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:28.243392   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:28.243398   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:28.243406   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:28.243414   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:28 GMT
	I1107 23:37:28.243555   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"722","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1107 23:37:28.243851   33391 pod_ready.go:97] node "multinode-553062" hosting pod "kube-proxy-944rz" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-553062" has status "Ready":"False"
	I1107 23:37:28.243866   33391 pod_ready.go:81] duration metric: took 399.189145ms waiting for pod "kube-proxy-944rz" in "kube-system" namespace to be "Ready" ...
	E1107 23:37:28.243874   33391 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-553062" hosting pod "kube-proxy-944rz" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-553062" has status "Ready":"False"
	I1107 23:37:28.243884   33391 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rktlk" in "kube-system" namespace to be "Ready" ...
	I1107 23:37:28.441292   33391 request.go:629] Waited for 197.34854ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rktlk
	I1107 23:37:28.441357   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rktlk
	I1107 23:37:28.441362   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:28.441369   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:28.441375   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:28.444476   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:37:28.444495   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:28.444501   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:28.444507   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:28 GMT
	I1107 23:37:28.444512   33391 round_trippers.go:580]     Audit-Id: d6b384bf-bdc4-4c38-90c9-ea7be593c9ba
	I1107 23:37:28.444517   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:28.444522   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:28.444527   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:28.445713   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rktlk","generateName":"kube-proxy-","namespace":"kube-system","uid":"92ea69ee-cd72-4594-a338-9837cc44e5a1","resourceVersion":"479","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"072addbc-9bf2-4d6f-93c3-120a159f2721","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"072addbc-9bf2-4d6f-93c3-120a159f2721\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5525 chars]
	I1107 23:37:28.641431   33391 request.go:629] Waited for 195.333899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:37:28.641500   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:37:28.641519   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:28.641529   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:28.641535   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:28.644550   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:28.644567   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:28.644572   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:28 GMT
	I1107 23:37:28.644578   33391 round_trippers.go:580]     Audit-Id: f38d1c52-287e-4a95-a942-a6dcd2b15e74
	I1107 23:37:28.644582   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:28.644590   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:28.644598   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:28.644606   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:28.644886   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m02","uid":"53135fdd-bf09-4482-8469-d918d3e75ee3","resourceVersion":"757","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3684 chars]
	I1107 23:37:28.645146   33391 pod_ready.go:92] pod "kube-proxy-rktlk" in "kube-system" namespace has status "Ready":"True"
	I1107 23:37:28.645160   33391 pod_ready.go:81] duration metric: took 401.269572ms waiting for pod "kube-proxy-rktlk" in "kube-system" namespace to be "Ready" ...
	I1107 23:37:28.645171   33391 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xwp5j" in "kube-system" namespace to be "Ready" ...
	I1107 23:37:28.840531   33391 request.go:629] Waited for 195.287256ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xwp5j
	I1107 23:37:28.840609   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xwp5j
	I1107 23:37:28.840614   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:28.840629   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:28.840639   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:28.843569   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:28.843588   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:28.843597   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:28.843606   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:28.843613   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:28.843621   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:28.843633   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:28 GMT
	I1107 23:37:28.843644   33391 round_trippers.go:580]     Audit-Id: b8de2739-636f-46dc-ad2f-9620a8ec32bd
	I1107 23:37:28.843786   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xwp5j","generateName":"kube-proxy-","namespace":"kube-system","uid":"0347e6b5-3070-4b6a-ae2b-d1ac56a385cd","resourceVersion":"691","creationTimestamp":"2023-11-07T23:28:45Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"072addbc-9bf2-4d6f-93c3-120a159f2721","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:28:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"072addbc-9bf2-4d6f-93c3-120a159f2721\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5525 chars]
	I1107 23:37:29.040649   33391 request.go:629] Waited for 196.313112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m03
	I1107 23:37:29.040753   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m03
	I1107 23:37:29.040759   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:29.040767   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:29.040779   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:29.043955   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:37:29.043983   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:29.043993   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:29.044001   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:29.044010   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:29.044019   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:29 GMT
	I1107 23:37:29.044028   33391 round_trippers.go:580]     Audit-Id: 599ebb4e-3523-4399-97d8-303a1c83035f
	I1107 23:37:29.044036   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:29.044241   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m03","uid":"c69b0e89-b34f-4710-b818-78e5076041aa","resourceVersion":"714","creationTimestamp":"2023-11-07T23:29:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:29:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I1107 23:37:29.044624   33391 pod_ready.go:92] pod "kube-proxy-xwp5j" in "kube-system" namespace has status "Ready":"True"
	I1107 23:37:29.044654   33391 pod_ready.go:81] duration metric: took 399.475691ms waiting for pod "kube-proxy-xwp5j" in "kube-system" namespace to be "Ready" ...
	I1107 23:37:29.044668   33391 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:37:29.241220   33391 request.go:629] Waited for 196.40497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-553062
	I1107 23:37:29.241289   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-553062
	I1107 23:37:29.241297   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:29.241308   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:29.241323   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:29.244395   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:37:29.244419   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:29.244429   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:29.244438   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:29 GMT
	I1107 23:37:29.244447   33391 round_trippers.go:580]     Audit-Id: 129b45d9-04dc-4249-8595-10995fcab210
	I1107 23:37:29.244455   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:29.244463   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:29.244471   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:29.244678   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-553062","namespace":"kube-system","uid":"334a75af-c6cb-45ac-a020-8afc3f4a4e7a","resourceVersion":"760","creationTimestamp":"2023-11-07T23:26:57Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"101b31a45aab34f5dc66aed5e9e7cce1","kubernetes.io/config.mirror":"101b31a45aab34f5dc66aed5e9e7cce1","kubernetes.io/config.seen":"2023-11-07T23:26:57.103265171Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4928 chars]
	I1107 23:37:29.440505   33391 request.go:629] Waited for 195.288204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:29.440569   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:29.440578   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:29.440597   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:29.440610   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:29.443708   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:37:29.443732   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:29.443742   33391 round_trippers.go:580]     Audit-Id: 040a4c6b-7e4d-43b0-b1bb-ae2c6a0f58fe
	I1107 23:37:29.443755   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:29.443764   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:29.443771   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:29.443779   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:29.443790   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:29 GMT
	I1107 23:37:29.443959   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"722","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1107 23:37:29.444290   33391 pod_ready.go:97] node "multinode-553062" hosting pod "kube-scheduler-multinode-553062" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-553062" has status "Ready":"False"
	I1107 23:37:29.444311   33391 pod_ready.go:81] duration metric: took 399.623916ms waiting for pod "kube-scheduler-multinode-553062" in "kube-system" namespace to be "Ready" ...
	E1107 23:37:29.444323   33391 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-553062" hosting pod "kube-scheduler-multinode-553062" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-553062" has status "Ready":"False"
	I1107 23:37:29.444336   33391 pod_ready.go:38] duration metric: took 1.760176771s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:37:29.444356   33391 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1107 23:37:29.461378   33391 command_runner.go:130] > -16
	I1107 23:37:29.461414   33391 ops.go:34] apiserver oom_adj: -16
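
The -16 read back above is the apiserver process's /proc oom_adj value; a negative value makes the kernel's OOM killer much less likely to select it. The same read done from Go rather than bash, as a sketch with the pgrep lookup replaced by an explicit pid argument:

	package sketch

	import (
		"fmt"
		"os"
	)

	// oomAdj returns the contents of /proc/<pid>/oom_adj; the bash equivalent
	// in the log is `cat /proc/$(pgrep kube-apiserver)/oom_adj`.
	func oomAdj(pid int) (string, error) {
		b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
		if err != nil {
			return "", err
		}
		return string(b), nil // e.g. "-16\n"
	}
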
	I1107 23:37:29.461422   33391 kubeadm.go:640] restartCluster took 22.437807271s
	I1107 23:37:29.461432   33391 kubeadm.go:406] StartCluster complete in 22.489961041s
	I1107 23:37:29.461454   33391 settings.go:142] acquiring lock: {Name:mk24113e0811d0822c92609e9886aa6fa175d90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:37:29.461551   33391 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1107 23:37:29.462458   33391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
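
The lock.go line shows minikube serializing kubeconfig writes behind a named lock (note the Delay/Timeout fields). A generic sketch of the same guard using an advisory flock; this is illustrative, not minikube's actual lock implementation:

	package sketch

	import (
		"os"

		"golang.org/x/sys/unix"
	)

	// writeLocked writes data to path while holding an exclusive advisory lock
	// on a sidecar lock file, so concurrent writers cannot interleave.
	func writeLocked(path string, data []byte) error {
		lf, err := os.OpenFile(path+".lock", os.O_CREATE|os.O_RDWR, 0o600)
		if err != nil {
			return err
		}
		defer lf.Close()
		if err := unix.Flock(int(lf.Fd()), unix.LOCK_EX); err != nil {
			return err
		}
		defer unix.Flock(int(lf.Fd()), unix.LOCK_UN)
		return os.WriteFile(path, data, 0o600)
	}
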
	I1107 23:37:29.462737   33391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1107 23:37:29.462759   33391 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1107 23:37:29.465702   33391 out.go:177] * Enabled addons: 
	I1107 23:37:29.462975   33391 config.go:182] Loaded profile config "multinode-553062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:37:29.463084   33391 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1107 23:37:29.467711   33391 addons.go:502] enable addons completed in 4.955206ms: enabled=[]
	I1107 23:37:29.466109   33391 kapi.go:59] client config for multinode-553062: &rest.Config{Host:"https://192.168.39.246:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.key", CAFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
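[editor's sketch] The rest.Config dump above shows the client built from the profile's client certificate, key and CA. A minimal client-go sketch that loads an equivalent config from the same kubeconfig path (path taken from the log; this is not minikube's kapi.go code):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path taken from the log above; adjust for your environment.
        config, err := clientcmd.BuildConfigFromFlags("",
            "/home/jenkins/minikube-integration/17585-9647/kubeconfig")
        if err != nil {
            panic(err)
        }
        // config.Host and config.TLSClientConfig now match the dump above
        // (https://192.168.39.246:8443, client.crt/client.key, ca.crt).
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        fmt.Println("API server:", config.Host)
        _ = clientset
    }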
	I1107 23:37:29.467991   33391 round_trippers.go:463] GET https://192.168.39.246:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1107 23:37:29.468001   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:29.468009   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:29.468017   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:29.470400   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:29.470415   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:29.470433   33391 round_trippers.go:580]     Content-Length: 291
	I1107 23:37:29.470443   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:29 GMT
	I1107 23:37:29.470452   33391 round_trippers.go:580]     Audit-Id: 1c8aa113-e2e9-4426-9392-79632e7f165a
	I1107 23:37:29.470464   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:29.470472   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:29.470487   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:29.470496   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:29.470514   33391 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"99a4298f-5274-4bac-956d-86f8091a0b82","resourceVersion":"771","creationTimestamp":"2023-11-07T23:26:57Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1107 23:37:29.470630   33391 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-553062" context rescaled to 1 replicas
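[editor's sketch] The GET against .../deployments/coredns/scale above uses the autoscaling/v1 Scale subresource, and kapi.go:248 then pins the deployment at one replica. The same rescale through client-go's typed client; rescaleCoreDNS is a hypothetical helper name:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // rescaleCoreDNS (hypothetical helper) pins the coredns deployment to the
    // given replica count via the autoscaling/v1 Scale subresource.
    func rescaleCoreDNS(clientset *kubernetes.Clientset, replicas int32) error {
        ctx := context.Background()
        deployments := clientset.AppsV1().Deployments("kube-system")
        scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        if scale.Spec.Replicas == replicas {
            return nil // already at the desired count, as in the run above
        }
        scale.Spec.Replicas = replicas
        _, err = deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        if err := rescaleCoreDNS(kubernetes.NewForConfigOrDie(config), 1); err != nil {
            panic(err)
        }
    }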
	I1107 23:37:29.470653   33391 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1107 23:37:29.473223   33391 out.go:177] * Verifying Kubernetes components...
	I1107 23:37:29.474710   33391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:37:29.566175   33391 command_runner.go:130] > apiVersion: v1
	I1107 23:37:29.566196   33391 command_runner.go:130] > data:
	I1107 23:37:29.566203   33391 command_runner.go:130] >   Corefile: |
	I1107 23:37:29.566207   33391 command_runner.go:130] >     .:53 {
	I1107 23:37:29.566211   33391 command_runner.go:130] >         log
	I1107 23:37:29.566216   33391 command_runner.go:130] >         errors
	I1107 23:37:29.566226   33391 command_runner.go:130] >         health {
	I1107 23:37:29.566231   33391 command_runner.go:130] >            lameduck 5s
	I1107 23:37:29.566235   33391 command_runner.go:130] >         }
	I1107 23:37:29.566240   33391 command_runner.go:130] >         ready
	I1107 23:37:29.566246   33391 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1107 23:37:29.566250   33391 command_runner.go:130] >            pods insecure
	I1107 23:37:29.566258   33391 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1107 23:37:29.566262   33391 command_runner.go:130] >            ttl 30
	I1107 23:37:29.566266   33391 command_runner.go:130] >         }
	I1107 23:37:29.566271   33391 command_runner.go:130] >         prometheus :9153
	I1107 23:37:29.566277   33391 command_runner.go:130] >         hosts {
	I1107 23:37:29.566284   33391 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I1107 23:37:29.566288   33391 command_runner.go:130] >            fallthrough
	I1107 23:37:29.566293   33391 command_runner.go:130] >         }
	I1107 23:37:29.566297   33391 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1107 23:37:29.566302   33391 command_runner.go:130] >            max_concurrent 1000
	I1107 23:37:29.566306   33391 command_runner.go:130] >         }
	I1107 23:37:29.566310   33391 command_runner.go:130] >         cache 30
	I1107 23:37:29.566317   33391 command_runner.go:130] >         loop
	I1107 23:37:29.566322   33391 command_runner.go:130] >         reload
	I1107 23:37:29.566326   33391 command_runner.go:130] >         loadbalance
	I1107 23:37:29.566335   33391 command_runner.go:130] >     }
	I1107 23:37:29.566339   33391 command_runner.go:130] > kind: ConfigMap
	I1107 23:37:29.566343   33391 command_runner.go:130] > metadata:
	I1107 23:37:29.566348   33391 command_runner.go:130] >   creationTimestamp: "2023-11-07T23:26:56Z"
	I1107 23:37:29.566352   33391 command_runner.go:130] >   name: coredns
	I1107 23:37:29.566357   33391 command_runner.go:130] >   namespace: kube-system
	I1107 23:37:29.566362   33391 command_runner.go:130] >   resourceVersion: "365"
	I1107 23:37:29.566369   33391 command_runner.go:130] >   uid: f4ddf0dd-b180-495a-83b0-8d6d546a8bca
	I1107 23:37:29.568719   33391 node_ready.go:35] waiting up to 6m0s for node "multinode-553062" to be "Ready" ...
	I1107 23:37:29.568869   33391 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
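[editor's sketch] start.go:899 skips rewriting the Corefile because the hosts block already carries the host.minikube.internal record (visible in the ConfigMap dump above). A minimal sketch of that idempotence check done through the API instead of the kubectl invocation shown in the log:

    package main

    import (
        "context"
        "fmt"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(config)

        // Fetch the coredns ConfigMap and look for the host record inside the
        // Corefile, mirroring the "already contains ... skipping" decision above.
        cm, err := clientset.CoreV1().ConfigMaps("kube-system").
            Get(context.Background(), "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if strings.Contains(cm.Data["Corefile"], "host.minikube.internal") {
            fmt.Println("host record present, skipping Corefile rewrite")
        } else {
            fmt.Println("host record missing, Corefile needs patching")
        }
    }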
	I1107 23:37:29.641090   33391 request.go:629] Waited for 72.248894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:29.641136   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:29.641141   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:29.641149   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:29.641158   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:29.644648   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:37:29.644669   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:29.644681   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:29.644688   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:29.644695   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:29.644703   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:29.644711   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:29 GMT
	I1107 23:37:29.644721   33391 round_trippers.go:580]     Audit-Id: ac1b70e3-298d-4782-b802-22e27812569c
	I1107 23:37:29.644952   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"722","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1107 23:37:29.840902   33391 request.go:629] Waited for 195.375584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:29.840967   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:29.840972   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:29.840982   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:29.840992   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:29.845334   33391 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1107 23:37:29.845359   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:29.845371   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:29 GMT
	I1107 23:37:29.845379   33391 round_trippers.go:580]     Audit-Id: d9c759c4-d100-4bf1-8e0c-c70077cf3a01
	I1107 23:37:29.845387   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:29.845395   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:29.845403   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:29.845412   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:29.847377   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"722","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1107 23:37:30.348440   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:30.348469   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:30.348495   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:30.348504   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:30.351587   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:37:30.351606   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:30.351617   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:30 GMT
	I1107 23:37:30.351623   33391 round_trippers.go:580]     Audit-Id: 917ad7b5-0633-4f3d-b386-eda424f579fd
	I1107 23:37:30.351629   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:30.351636   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:30.351644   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:30.351652   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:30.352731   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"722","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1107 23:37:30.848332   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:30.848357   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:30.848365   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:30.848371   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:30.851363   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:30.851384   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:30.851394   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:30 GMT
	I1107 23:37:30.851406   33391 round_trippers.go:580]     Audit-Id: 6176d0fd-a4d9-4741-91d7-49bc2f2d834e
	I1107 23:37:30.851414   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:30.851427   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:30.851435   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:30.851446   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:30.851619   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"837","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1107 23:37:30.851940   33391 node_ready.go:49] node "multinode-553062" has status "Ready":"True"
	I1107 23:37:30.851964   33391 node_ready.go:38] duration metric: took 1.283217721s waiting for node "multinode-553062" to be "Ready" ...
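[editor's sketch] node_ready.go polls GET /api/v1/nodes/multinode-553062 until the NodeReady condition flips to True, as it does between resourceVersion 722 and 837 above. A self-contained polling sketch with client-go; the 500ms interval matches the cadence in the log and the 6m timeout matches the stated wait:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady reports whether the node's NodeReady condition is True.
    func nodeIsReady(node *corev1.Node) bool {
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(config)

        // Poll every 500ms for up to 6m, like the node_ready wait above.
        err = wait.PollUntilContextTimeout(context.Background(),
            500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := clientset.CoreV1().Nodes().
                    Get(ctx, "multinode-553062", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat errors as transient and keep polling
                }
                return nodeIsReady(node), nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println("node \"multinode-553062\" has status \"Ready\":\"True\"")
    }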
	I1107 23:37:30.851978   33391 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:37:30.852045   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1107 23:37:30.852053   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:30.852063   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:30.852076   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:30.859654   33391 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1107 23:37:30.859671   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:30.859681   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:30 GMT
	I1107 23:37:30.859689   33391 round_trippers.go:580]     Audit-Id: e0bb98bc-2429-48d1-8318-f41e259b625f
	I1107 23:37:30.859696   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:30.859709   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:30.859722   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:30.859731   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:30.862235   33391 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"837"},"items":[{"metadata":{"name":"coredns-5dd5756b68-6ggfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"785c6064-d793-4959-8e34-28b4fc2549fc","resourceVersion":"759","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b131694e-1b3b-40e6-bc1b-3f62a604903c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b131694e-1b3b-40e6-bc1b-3f62a604903c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82993 chars]
	I1107 23:37:30.864716   33391 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6ggfr" in "kube-system" namespace to be "Ready" ...
	I1107 23:37:30.864790   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6ggfr
	I1107 23:37:30.864801   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:30.864811   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:30.864836   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:30.867172   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:30.867186   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:30.867192   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:30.867198   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:30 GMT
	I1107 23:37:30.867203   33391 round_trippers.go:580]     Audit-Id: 706d3c88-21d6-4a52-9ed4-61fbaed6d29b
	I1107 23:37:30.867208   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:30.867213   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:30.867221   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:30.867463   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6ggfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"785c6064-d793-4959-8e34-28b4fc2549fc","resourceVersion":"759","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b131694e-1b3b-40e6-bc1b-3f62a604903c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b131694e-1b3b-40e6-bc1b-3f62a604903c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1107 23:37:30.867880   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:30.867895   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:30.867902   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:30.867911   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:30.871260   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:37:30.871280   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:30.871289   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:30.871298   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:30.871314   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:30.871327   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:30.871343   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:30 GMT
	I1107 23:37:30.871357   33391 round_trippers.go:580]     Audit-Id: 761536c3-8221-4b9c-b197-67664a2eef3a
	I1107 23:37:30.871503   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"837","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1107 23:37:31.041266   33391 request.go:629] Waited for 169.324558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6ggfr
	I1107 23:37:31.041347   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6ggfr
	I1107 23:37:31.041358   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:31.041370   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:31.041381   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:31.044000   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:31.044021   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:31.044032   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:31.044041   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:31.044053   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:31.044061   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:31.044071   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:31 GMT
	I1107 23:37:31.044083   33391 round_trippers.go:580]     Audit-Id: 02daa1de-be7a-4919-b4d0-7ef1c909f071
	I1107 23:37:31.044267   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6ggfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"785c6064-d793-4959-8e34-28b4fc2549fc","resourceVersion":"759","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b131694e-1b3b-40e6-bc1b-3f62a604903c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b131694e-1b3b-40e6-bc1b-3f62a604903c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1107 23:37:31.240905   33391 request.go:629] Waited for 196.186131ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:31.240963   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:31.240971   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:31.240985   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:31.240999   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:31.243639   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:31.243663   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:31.243672   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:31.243680   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:31 GMT
	I1107 23:37:31.243689   33391 round_trippers.go:580]     Audit-Id: a443647d-6b30-4dfd-8898-ef5babaf6bb8
	I1107 23:37:31.243701   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:31.243710   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:31.243723   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:31.243903   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"837","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1107 23:37:31.744965   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6ggfr
	I1107 23:37:31.744995   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:31.745005   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:31.745013   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:31.747480   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:31.747496   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:31.747504   33391 round_trippers.go:580]     Audit-Id: 9ef9147e-a7cb-4172-8bf4-d5eac116472c
	I1107 23:37:31.747512   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:31.747520   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:31.747530   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:31.747549   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:31.747564   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:31 GMT
	I1107 23:37:31.748017   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6ggfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"785c6064-d793-4959-8e34-28b4fc2549fc","resourceVersion":"759","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b131694e-1b3b-40e6-bc1b-3f62a604903c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b131694e-1b3b-40e6-bc1b-3f62a604903c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1107 23:37:31.748444   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:31.748458   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:31.748465   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:31.748472   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:31.750747   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:31.750764   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:31.750773   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:31.750781   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:31.750788   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:31.750801   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:31.750809   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:31 GMT
	I1107 23:37:31.750821   33391 round_trippers.go:580]     Audit-Id: 39aca762-061b-4bbc-921a-7eaf9da0d810
	I1107 23:37:31.750986   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"837","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1107 23:37:32.244621   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6ggfr
	I1107 23:37:32.244652   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:32.244661   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:32.244669   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:32.248030   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:37:32.248055   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:32.248063   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:32.248071   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:32.248076   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:32 GMT
	I1107 23:37:32.248091   33391 round_trippers.go:580]     Audit-Id: cdf083ef-2d1b-4194-a8e6-f1ed54d6f22a
	I1107 23:37:32.248100   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:32.248108   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:32.248847   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6ggfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"785c6064-d793-4959-8e34-28b4fc2549fc","resourceVersion":"759","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b131694e-1b3b-40e6-bc1b-3f62a604903c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b131694e-1b3b-40e6-bc1b-3f62a604903c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1107 23:37:32.249272   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:32.249285   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:32.249292   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:32.249299   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:32.251353   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:32.251366   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:32.251371   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:32.251376   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:32.251381   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:32.251389   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:32.251397   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:32 GMT
	I1107 23:37:32.251410   33391 round_trippers.go:580]     Audit-Id: c327accb-52c1-4f46-b776-ac4785fcbe29
	I1107 23:37:32.251686   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"837","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1107 23:37:32.745431   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6ggfr
	I1107 23:37:32.745457   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:32.745465   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:32.745471   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:32.748406   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:32.748424   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:32.748430   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:32 GMT
	I1107 23:37:32.748436   33391 round_trippers.go:580]     Audit-Id: 17a163d5-2745-4338-aa2a-d4135c032178
	I1107 23:37:32.748444   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:32.748453   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:32.748464   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:32.748476   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:32.749313   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6ggfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"785c6064-d793-4959-8e34-28b4fc2549fc","resourceVersion":"759","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b131694e-1b3b-40e6-bc1b-3f62a604903c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b131694e-1b3b-40e6-bc1b-3f62a604903c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1107 23:37:32.749771   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:32.749787   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:32.749797   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:32.749809   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:32.751793   33391 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 23:37:32.751812   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:32.751821   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:32.751830   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:32.751837   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:32.751845   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:32 GMT
	I1107 23:37:32.751853   33391 round_trippers.go:580]     Audit-Id: 4f85f10a-804b-4269-90ee-7a2a12a40b2d
	I1107 23:37:32.751864   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:32.752094   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"837","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1107 23:37:33.244992   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6ggfr
	I1107 23:37:33.245018   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:33.245031   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:33.245044   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:33.248264   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:37:33.248281   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:33.248287   33391 round_trippers.go:580]     Audit-Id: fafea479-bc2a-4ac9-96ce-4f10c3eb535b
	I1107 23:37:33.248293   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:33.248313   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:33.248320   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:33.248327   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:33.248335   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:33 GMT
	I1107 23:37:33.248487   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6ggfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"785c6064-d793-4959-8e34-28b4fc2549fc","resourceVersion":"759","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b131694e-1b3b-40e6-bc1b-3f62a604903c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b131694e-1b3b-40e6-bc1b-3f62a604903c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1107 23:37:33.248989   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:33.249003   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:33.249010   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:33.249016   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:33.250988   33391 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 23:37:33.251005   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:33.251014   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:33.251021   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:33.251029   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:33.251037   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:33.251045   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:33 GMT
	I1107 23:37:33.251064   33391 round_trippers.go:580]     Audit-Id: 86373651-fdbd-408f-8f75-81f67ab570aa
	I1107 23:37:33.251365   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"837","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1107 23:37:33.251637   33391 pod_ready.go:102] pod "coredns-5dd5756b68-6ggfr" in "kube-system" namespace has status "Ready":"False"
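[editor's sketch] pod_ready.go:102 reports "Ready":"False" for the coredns pod by reading its PodReady condition, which stays False until the pod's resourceVersion advances to 848 below. A sketch of that condition lookup; podReadyStatus is a hypothetical helper name:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReadyStatus (hypothetical helper) returns the PodReady condition
    // status, the value pod_ready.go logs above as "Ready":"False".
    func podReadyStatus(pod *corev1.Pod) corev1.ConditionStatus {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status
            }
        }
        return corev1.ConditionUnknown
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(config)
        pod, err := clientset.CoreV1().Pods("kube-system").
            Get(context.Background(), "coredns-5dd5756b68-6ggfr", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("pod %q has status \"Ready\":%q\n", pod.Name, podReadyStatus(pod))
    }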
	I1107 23:37:33.745069   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6ggfr
	I1107 23:37:33.745092   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:33.745100   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:33.745106   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:33.747995   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:33.748020   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:33.748030   33391 round_trippers.go:580]     Audit-Id: 19effde7-bef6-4211-8bcb-dfcd425bec72
	I1107 23:37:33.748039   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:33.748046   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:33.748055   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:33.748063   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:33.748077   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:33 GMT
	I1107 23:37:33.748303   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6ggfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"785c6064-d793-4959-8e34-28b4fc2549fc","resourceVersion":"759","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b131694e-1b3b-40e6-bc1b-3f62a604903c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b131694e-1b3b-40e6-bc1b-3f62a604903c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1107 23:37:33.748923   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:33.748946   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:33.748957   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:33.748968   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:33.751627   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:33.751646   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:33.751655   33391 round_trippers.go:580]     Audit-Id: 268c1f0d-bdc9-4e06-9fb0-1c72805da12d
	I1107 23:37:33.751663   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:33.751670   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:33.751677   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:33.751689   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:33.751699   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:33 GMT
	I1107 23:37:33.751958   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"837","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1107 23:37:34.244613   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6ggfr
	I1107 23:37:34.244642   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:34.244654   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:34.244663   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:34.247735   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:37:34.247757   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:34.247779   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:34.247788   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:34 GMT
	I1107 23:37:34.247795   33391 round_trippers.go:580]     Audit-Id: aa6c6ad1-f649-4f37-8ce4-88e5c53866d2
	I1107 23:37:34.247802   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:34.247809   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:34.247817   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:34.248357   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6ggfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"785c6064-d793-4959-8e34-28b4fc2549fc","resourceVersion":"759","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b131694e-1b3b-40e6-bc1b-3f62a604903c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b131694e-1b3b-40e6-bc1b-3f62a604903c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1107 23:37:34.248829   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:34.248845   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:34.248854   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:34.248868   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:34.254830   33391 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1107 23:37:34.254852   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:34.254861   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:34.254870   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:34.254878   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:34 GMT
	I1107 23:37:34.254893   33391 round_trippers.go:580]     Audit-Id: 39a81151-8fa8-41d5-9739-3815a9bc1b1e
	I1107 23:37:34.254906   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:34.254913   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:34.255536   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"837","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1107 23:37:34.745045   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6ggfr
	I1107 23:37:34.745071   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:34.745082   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:34.745091   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:34.747770   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:34.747794   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:34.747803   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:34 GMT
	I1107 23:37:34.747810   33391 round_trippers.go:580]     Audit-Id: b6852c92-3476-4cf7-a623-ae1d7d71c909
	I1107 23:37:34.747817   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:34.747823   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:34.747830   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:34.747840   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:34.748245   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6ggfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"785c6064-d793-4959-8e34-28b4fc2549fc","resourceVersion":"848","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b131694e-1b3b-40e6-bc1b-3f62a604903c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b131694e-1b3b-40e6-bc1b-3f62a604903c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1107 23:37:34.748684   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:34.748700   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:34.748709   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:34.748718   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:34.750689   33391 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 23:37:34.750708   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:34.750717   33391 round_trippers.go:580]     Audit-Id: f5f8bcef-9e45-42fb-8e0b-49a520f89283
	I1107 23:37:34.750726   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:34.750735   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:34.750747   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:34.750755   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:34.750766   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:34 GMT
	I1107 23:37:34.751077   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"837","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1107 23:37:34.751378   33391 pod_ready.go:92] pod "coredns-5dd5756b68-6ggfr" in "kube-system" namespace has status "Ready":"True"
	I1107 23:37:34.751396   33391 pod_ready.go:81] duration metric: took 3.886660395s waiting for pod "coredns-5dd5756b68-6ggfr" in "kube-system" namespace to be "Ready" ...
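
The pod_ready.go entries above are minikube polling the API server roughly every 500ms until each kube-system pod reports the Ready condition, with a per-pod timeout (6m0s here). A minimal client-go sketch of that loop, assuming a plain get-and-check poll; the package and function names are hypothetical and this is not minikube's actual implementation:

    // waitPodReady polls a pod until its Ready condition is True, mirroring
    // the pod_ready.go wait seen in the trace above. Sketch only.
    package sketch

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					return nil // the `has status "Ready":"True"` case in the log
    				}
    			}
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence visible in the timestamps
    	}
    }
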
	I1107 23:37:34.751407   33391 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:37:34.751458   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-553062
	I1107 23:37:34.751472   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:34.751483   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:34.751489   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:34.754750   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:37:34.754769   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:34.754778   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:34.754786   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:34 GMT
	I1107 23:37:34.754794   33391 round_trippers.go:580]     Audit-Id: 2abd257c-d476-45b7-876c-284cfe9dec95
	I1107 23:37:34.754803   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:34.754812   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:34.754822   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:34.755126   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-553062","namespace":"kube-system","uid":"3819c5f8-686f-4ce6-95fb-e9d5bb68cbc1","resourceVersion":"839","creationTimestamp":"2023-11-07T23:26:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.246:2379","kubernetes.io/config.hash":"f82562fbdca14daeb385ae6968954f46","kubernetes.io/config.mirror":"f82562fbdca14daeb385ae6968954f46","kubernetes.io/config.seen":"2023-11-07T23:26:48.362630200Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1107 23:37:34.755554   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:34.755569   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:34.755578   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:34.755588   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:34.758094   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:34.758114   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:34.758139   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:34.758147   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:34 GMT
	I1107 23:37:34.758154   33391 round_trippers.go:580]     Audit-Id: cbb9d36e-82e2-4697-bdd2-49e6a1779cc8
	I1107 23:37:34.758162   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:34.758170   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:34.758177   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:34.758390   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"837","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1107 23:37:34.758679   33391 pod_ready.go:92] pod "etcd-multinode-553062" in "kube-system" namespace has status "Ready":"True"
	I1107 23:37:34.758695   33391 pod_ready.go:81] duration metric: took 7.280405ms waiting for pod "etcd-multinode-553062" in "kube-system" namespace to be "Ready" ...
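
Note the User-Agent "minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format" on every request: the v0.0.0 and the unexpanded $Format placeholder suggest the test binary was built without ldflags version stamping, so client-go fell back to its default user-agent template. rest.Config exposes the field directly if a distinct agent string is wanted; the string below is illustrative, not anything minikube sets:

    // The User-Agent in the trace is client-go's default, composed from
    // build-time version info; without stamping it degrades to v0.0.0 and
    // a literal $Format. Overriding it is a single field:
    package sketch

    import "k8s.io/client-go/rest"

    func withUserAgent(cfg *rest.Config) *rest.Config {
    	cfg.UserAgent = "my-tool/v1.2.3 (linux/amd64)" // hypothetical agent string
    	return cfg
    }
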
	I1107 23:37:34.758731   33391 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:37:34.758787   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-553062
	I1107 23:37:34.758797   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:34.758807   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:34.758817   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:34.760887   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:34.760905   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:34.760913   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:34.760921   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:34.760933   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:34 GMT
	I1107 23:37:34.760946   33391 round_trippers.go:580]     Audit-Id: 6892d58b-155c-400a-9b8f-a102d3de5a03
	I1107 23:37:34.760953   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:34.760963   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:34.761312   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-553062","namespace":"kube-system","uid":"30896fa0-3d8f-4861-bdf5-ad94796ad097","resourceVersion":"841","creationTimestamp":"2023-11-07T23:26:57Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.246:8443","kubernetes.io/config.hash":"cf3161d745dce4ca9e35cf659a0b5ec9","kubernetes.io/config.mirror":"cf3161d745dce4ca9e35cf659a0b5ec9","kubernetes.io/config.seen":"2023-11-07T23:26:57.103263110Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1107 23:37:34.840988   33391 request.go:629] Waited for 79.22445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:34.841074   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:34.841085   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:34.841098   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:34.841116   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:34.843275   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:34.843288   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:34.843294   33391 round_trippers.go:580]     Audit-Id: 83c29f4f-7c59-401e-8307-51dfecbb713a
	I1107 23:37:34.843300   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:34.843313   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:34.843322   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:34.843327   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:34.843337   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:34 GMT
	I1107 23:37:34.843508   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"837","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1107 23:37:34.843803   33391 pod_ready.go:92] pod "kube-apiserver-multinode-553062" in "kube-system" namespace has status "Ready":"True"
	I1107 23:37:34.843817   33391 pod_ready.go:81] duration metric: took 85.074509ms waiting for pod "kube-apiserver-multinode-553062" in "kube-system" namespace to be "Ready" ...
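
The recurring request.go:629 lines ("Waited ... due to client-side throttling, not priority and fairness") come from client-go's local rate limiter: requests are delayed on the client before the server's priority-and-fairness machinery is ever involved. The limiter is driven by the QPS and Burst fields on rest.Config; a sketch of raising them (the numbers are illustrative assumptions, not minikube's settings):

    // Client-go throttles locally using rest.Config.QPS/Burst (historically
    // 5/10 by default), which is what produces the request.go:629 waits above.
    package sketch

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func newLessThrottledClient(kubeconfig string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, err
    	}
    	cfg.QPS = 50    // sustained requests per second before throttling kicks in
    	cfg.Burst = 100 // short-term burst allowance above QPS
    	return kubernetes.NewForConfig(cfg)
    }
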
	I1107 23:37:34.843826   33391 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:37:35.041217   33391 request.go:629] Waited for 197.338749ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-553062
	I1107 23:37:35.041307   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-553062
	I1107 23:37:35.041318   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:35.041340   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:35.041354   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:35.044025   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:35.044042   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:35.044049   33391 round_trippers.go:580]     Audit-Id: 1c06ea1e-0010-4200-bef5-be30ef7dea1f
	I1107 23:37:35.044054   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:35.044059   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:35.044064   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:35.044069   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:35.044075   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:35 GMT
	I1107 23:37:35.044321   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-553062","namespace":"kube-system","uid":"5a895945-b908-44ba-a1c8-93245f6a93f5","resourceVersion":"842","creationTimestamp":"2023-11-07T23:26:57Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6355e861fae0971467df802e2b4d8be6","kubernetes.io/config.mirror":"6355e861fae0971467df802e2b4d8be6","kubernetes.io/config.seen":"2023-11-07T23:26:57.103264314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1107 23:37:35.241088   33391 request.go:629] Waited for 196.342409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:35.241163   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:35.241170   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:35.241179   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:35.241189   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:35.243991   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:35.244009   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:35.244029   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:35.244038   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:35.244046   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:35.244056   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:35 GMT
	I1107 23:37:35.244066   33391 round_trippers.go:580]     Audit-Id: f95d83e4-1e40-44e8-a422-d56bef5f5549
	I1107 23:37:35.244075   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:35.244618   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"837","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1107 23:37:35.244934   33391 pod_ready.go:92] pod "kube-controller-manager-multinode-553062" in "kube-system" namespace has status "Ready":"True"
	I1107 23:37:35.244953   33391 pod_ready.go:81] duration metric: took 401.117622ms waiting for pod "kube-controller-manager-multinode-553062" in "kube-system" namespace to be "Ready" ...
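
Every response in the trace carries X-Kubernetes-Pf-Flowschema-Uid and X-Kubernetes-Pf-Prioritylevel-Uid headers: the API server's Priority and Fairness classification of the request, i.e. the server-side mechanism the throttling message explicitly says it is not. A sketch of surfacing those headers with a wrapping transport; the logRT type is a hypothetical helper, while rest.Config.WrapTransport is the real hook:

    // A wrapping http.RoundTripper that logs the API Priority and Fairness
    // headers seen in the trace. logRT is a hypothetical helper, not part of
    // client-go; rest.Config.WrapTransport is client-go's extension point.
    package sketch

    import (
    	"log"
    	"net/http"

    	"k8s.io/client-go/rest"
    )

    type logRT struct{ base http.RoundTripper }

    func (l logRT) RoundTrip(req *http.Request) (*http.Response, error) {
    	resp, err := l.base.RoundTrip(req)
    	if err == nil {
    		log.Printf("%s %s flowschema=%s prioritylevel=%s",
    			req.Method, req.URL.Path,
    			resp.Header.Get("X-Kubernetes-Pf-Flowschema-Uid"),
    			resp.Header.Get("X-Kubernetes-Pf-Prioritylevel-Uid"))
    	}
    	return resp, err
    }

    func withAPFLogging(cfg *rest.Config) *rest.Config {
    	cfg.WrapTransport = func(rt http.RoundTripper) http.RoundTripper {
    		return logRT{base: rt}
    	}
    	return cfg
    }
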
	I1107 23:37:35.244970   33391 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-944rz" in "kube-system" namespace to be "Ready" ...
	I1107 23:37:35.441413   33391 request.go:629] Waited for 196.370823ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-944rz
	I1107 23:37:35.441490   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-944rz
	I1107 23:37:35.441500   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:35.441511   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:35.441529   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:35.444164   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:35.444184   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:35.444193   33391 round_trippers.go:580]     Audit-Id: e99afd6f-ce11-4f54-ae1b-76a37fd9c59b
	I1107 23:37:35.444201   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:35.444210   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:35.444223   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:35.444236   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:35.444246   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:35 GMT
	I1107 23:37:35.444415   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-944rz","generateName":"kube-proxy-","namespace":"kube-system","uid":"db20b1cf-b422-4649-a6e1-4549c4c56f33","resourceVersion":"772","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"072addbc-9bf2-4d6f-93c3-120a159f2721","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"072addbc-9bf2-4d6f-93c3-120a159f2721\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1107 23:37:35.641245   33391 request.go:629] Waited for 196.339039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:35.641341   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:35.641353   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:35.641364   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:35.641375   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:35.644163   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:35.644183   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:35.644198   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:35.644203   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:35.644208   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:35.644222   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:35.644227   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:35 GMT
	I1107 23:37:35.644232   33391 round_trippers.go:580]     Audit-Id: c9185983-a234-4827-815c-e7db06c96f58
	I1107 23:37:35.644476   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"837","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1107 23:37:35.644803   33391 pod_ready.go:92] pod "kube-proxy-944rz" in "kube-system" namespace has status "Ready":"True"
	I1107 23:37:35.644828   33391 pod_ready.go:81] duration metric: took 399.85176ms waiting for pod "kube-proxy-944rz" in "kube-system" namespace to be "Ready" ...
	I1107 23:37:35.644838   33391 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rktlk" in "kube-system" namespace to be "Ready" ...
	I1107 23:37:35.841260   33391 request.go:629] Waited for 196.35828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rktlk
	I1107 23:37:35.841332   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rktlk
	I1107 23:37:35.841340   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:35.841352   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:35.841365   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:35.844548   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:37:35.844571   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:35.844582   33391 round_trippers.go:580]     Audit-Id: a857e59d-c423-4b4c-9146-41b851203eeb
	I1107 23:37:35.844601   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:35.844607   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:35.844612   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:35.844617   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:35.844622   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:35 GMT
	I1107 23:37:35.844807   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rktlk","generateName":"kube-proxy-","namespace":"kube-system","uid":"92ea69ee-cd72-4594-a338-9837cc44e5a1","resourceVersion":"479","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"072addbc-9bf2-4d6f-93c3-120a159f2721","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"072addbc-9bf2-4d6f-93c3-120a159f2721\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5525 chars]
	I1107 23:37:36.040582   33391 request.go:629] Waited for 195.272397ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:37:36.040692   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:37:36.040707   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:36.040718   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:36.040729   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:36.043906   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:37:36.043923   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:36.043930   33391 round_trippers.go:580]     Audit-Id: b97e5f6f-c036-4a9f-b0e0-f3269f7ea08e
	I1107 23:37:36.043935   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:36.043941   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:36.043948   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:36.043964   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:36.043976   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:36 GMT
	I1107 23:37:36.044571   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m02","uid":"53135fdd-bf09-4482-8469-d918d3e75ee3","resourceVersion":"757","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3684 chars]
	I1107 23:37:36.044889   33391 pod_ready.go:92] pod "kube-proxy-rktlk" in "kube-system" namespace has status "Ready":"True"
	I1107 23:37:36.044911   33391 pod_ready.go:81] duration metric: took 400.06662ms waiting for pod "kube-proxy-rktlk" in "kube-system" namespace to be "Ready" ...
	I1107 23:37:36.044924   33391 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xwp5j" in "kube-system" namespace to be "Ready" ...
	I1107 23:37:36.241323   33391 request.go:629] Waited for 196.334076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xwp5j
	I1107 23:37:36.241420   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xwp5j
	I1107 23:37:36.241447   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:36.241459   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:36.241470   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:36.244444   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:36.244464   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:36.244473   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:36.244480   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:36.244489   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:36 GMT
	I1107 23:37:36.244503   33391 round_trippers.go:580]     Audit-Id: 13893369-09e2-465c-a5f4-9c3d124d3f8a
	I1107 23:37:36.244513   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:36.244532   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:36.244683   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xwp5j","generateName":"kube-proxy-","namespace":"kube-system","uid":"0347e6b5-3070-4b6a-ae2b-d1ac56a385cd","resourceVersion":"691","creationTimestamp":"2023-11-07T23:28:45Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"072addbc-9bf2-4d6f-93c3-120a159f2721","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:28:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"072addbc-9bf2-4d6f-93c3-120a159f2721\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5525 chars]
	I1107 23:37:36.441457   33391 request.go:629] Waited for 196.340248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m03
	I1107 23:37:36.441522   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m03
	I1107 23:37:36.441528   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:36.441535   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:36.441541   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:36.444189   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:36.444200   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:36.444206   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:36.444212   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:36.444225   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:36 GMT
	I1107 23:37:36.444235   33391 round_trippers.go:580]     Audit-Id: cd5d5205-80b9-4636-a13e-680583a585a3
	I1107 23:37:36.444241   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:36.444246   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:36.444417   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m03","uid":"c69b0e89-b34f-4710-b818-78e5076041aa","resourceVersion":"714","creationTimestamp":"2023-11-07T23:29:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:29:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I1107 23:37:36.444778   33391 pod_ready.go:92] pod "kube-proxy-xwp5j" in "kube-system" namespace has status "Ready":"True"
	I1107 23:37:36.444802   33391 pod_ready.go:81] duration metric: took 399.867021ms waiting for pod "kube-proxy-xwp5j" in "kube-system" namespace to be "Ready" ...
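
After the control-plane pods, the loop waits on each kube-proxy pod individually (kube-proxy-944rz, -rktlk, -xwp5j, one per node of the three-node cluster) and re-fetches the owning node after each pod check. The same sweep can be expressed with a label selector; a sketch assuming the standard k8s-app=kube-proxy DaemonSet label, not minikube's exact flow:

    // List the kube-proxy DaemonSet pods (one per node) with a label selector
    // and check each for the Ready condition, mirroring the per-pod waits in
    // the log. Names and flow are illustrative.
    package sketch

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func kubeProxyReady(ctx context.Context, cs kubernetes.Interface) error {
    	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
    		LabelSelector: "k8s-app=kube-proxy",
    	})
    	if err != nil {
    		return err
    	}
    	for _, pod := range pods.Items {
    		ready := false
    		for _, c := range pod.Status.Conditions {
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				ready = true
    			}
    		}
    		if !ready {
    			return fmt.Errorf("kube-proxy pod %s (node %s) not Ready", pod.Name, pod.Spec.NodeName)
    		}
    	}
    	return nil
    }
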
	I1107 23:37:36.444828   33391 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:37:36.641240   33391 request.go:629] Waited for 196.332107ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-553062
	I1107 23:37:36.641301   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-553062
	I1107 23:37:36.641311   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:36.641322   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:36.641334   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:36.644038   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:36.644057   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:36.644062   33391 round_trippers.go:580]     Audit-Id: 2cd0ea71-3445-4301-a9ac-6c25db064469
	I1107 23:37:36.644068   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:36.644073   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:36.644078   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:36.644085   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:36.644093   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:36 GMT
	I1107 23:37:36.644305   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-553062","namespace":"kube-system","uid":"334a75af-c6cb-45ac-a020-8afc3f4a4e7a","resourceVersion":"760","creationTimestamp":"2023-11-07T23:26:57Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"101b31a45aab34f5dc66aed5e9e7cce1","kubernetes.io/config.mirror":"101b31a45aab34f5dc66aed5e9e7cce1","kubernetes.io/config.seen":"2023-11-07T23:26:57.103265171Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4928 chars]
	I1107 23:37:36.841005   33391 request.go:629] Waited for 196.327731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:36.841076   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:36.841081   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:36.841088   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:36.841097   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:36.854968   33391 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1107 23:37:36.854994   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:36.855004   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:36.855010   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:36.855019   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:36 GMT
	I1107 23:37:36.855027   33391 round_trippers.go:580]     Audit-Id: 00bea259-1ac8-4d40-be2f-befb613b01e6
	I1107 23:37:36.855035   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:36.855045   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:36.855312   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"837","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1107 23:37:37.040783   33391 request.go:629] Waited for 185.15033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-553062
	I1107 23:37:37.040881   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-553062
	I1107 23:37:37.040890   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:37.040902   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:37.040917   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:37.044370   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:37:37.044398   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:37.044409   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:37.044418   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:37 GMT
	I1107 23:37:37.044427   33391 round_trippers.go:580]     Audit-Id: 10e68671-fac7-4d10-8ecc-c3afcab73fd8
	I1107 23:37:37.044436   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:37.044445   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:37.044470   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:37.044607   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-553062","namespace":"kube-system","uid":"334a75af-c6cb-45ac-a020-8afc3f4a4e7a","resourceVersion":"760","creationTimestamp":"2023-11-07T23:26:57Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"101b31a45aab34f5dc66aed5e9e7cce1","kubernetes.io/config.mirror":"101b31a45aab34f5dc66aed5e9e7cce1","kubernetes.io/config.seen":"2023-11-07T23:26:57.103265171Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4928 chars]
	I1107 23:37:37.241322   33391 request.go:629] Waited for 196.335758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:37.241381   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:37.241386   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:37.241394   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:37.241425   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:37.244054   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:37.244072   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:37.244080   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:37.244095   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:37.244105   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:37.244113   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:37.244120   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:37 GMT
	I1107 23:37:37.244125   33391 round_trippers.go:580]     Audit-Id: 71803b65-7bf6-4b54-a014-dd112061a3f8
	I1107 23:37:37.244296   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"837","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1107 23:37:37.745105   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-553062
	I1107 23:37:37.745136   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:37.745147   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:37.745156   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:37.747884   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:37.747903   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:37.747910   33391 round_trippers.go:580]     Audit-Id: a8309d09-c5ff-4a3a-a099-a44ec0b86f23
	I1107 23:37:37.747915   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:37.747920   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:37.747925   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:37.747930   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:37.747935   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:37 GMT
	I1107 23:37:37.748578   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-553062","namespace":"kube-system","uid":"334a75af-c6cb-45ac-a020-8afc3f4a4e7a","resourceVersion":"760","creationTimestamp":"2023-11-07T23:26:57Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"101b31a45aab34f5dc66aed5e9e7cce1","kubernetes.io/config.mirror":"101b31a45aab34f5dc66aed5e9e7cce1","kubernetes.io/config.seen":"2023-11-07T23:26:57.103265171Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4928 chars]
	I1107 23:37:37.749003   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:37.749019   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:37.749026   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:37.749032   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:37.751107   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:37.751123   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:37.751133   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:37 GMT
	I1107 23:37:37.751142   33391 round_trippers.go:580]     Audit-Id: 04bd8366-9046-4135-a7ea-c0dacd7af96b
	I1107 23:37:37.751150   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:37.751160   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:37.751169   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:37.751185   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:37.751575   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"837","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1107 23:37:38.245342   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-553062
	I1107 23:37:38.245367   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:38.245378   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:38.245387   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:38.248160   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:38.248183   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:38.248193   33391 round_trippers.go:580]     Audit-Id: 0229ef52-644b-4d2b-a1f7-aa1d23b045d7
	I1107 23:37:38.248201   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:38.248212   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:38.248252   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:38.248267   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:38.248278   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:38 GMT
	I1107 23:37:38.248455   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-553062","namespace":"kube-system","uid":"334a75af-c6cb-45ac-a020-8afc3f4a4e7a","resourceVersion":"760","creationTimestamp":"2023-11-07T23:26:57Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"101b31a45aab34f5dc66aed5e9e7cce1","kubernetes.io/config.mirror":"101b31a45aab34f5dc66aed5e9e7cce1","kubernetes.io/config.seen":"2023-11-07T23:26:57.103265171Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4928 chars]
	I1107 23:37:38.248841   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:38.248857   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:38.248868   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:38.248876   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:38.251270   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:38.251289   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:38.251298   33391 round_trippers.go:580]     Audit-Id: 552bda60-d3ae-4952-8ed8-49708525098a
	I1107 23:37:38.251305   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:38.251313   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:38.251325   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:38.251333   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:38.251341   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:38 GMT
	I1107 23:37:38.251648   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"837","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1107 23:37:38.745253   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-553062
	I1107 23:37:38.745287   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:38.745295   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:38.745301   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:38.748708   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:37:38.748731   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:38.748740   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:38 GMT
	I1107 23:37:38.748747   33391 round_trippers.go:580]     Audit-Id: a773f8dc-58b1-4deb-8655-d75239c5b862
	I1107 23:37:38.748754   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:38.748761   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:38.748768   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:38.748777   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:38.749134   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-553062","namespace":"kube-system","uid":"334a75af-c6cb-45ac-a020-8afc3f4a4e7a","resourceVersion":"760","creationTimestamp":"2023-11-07T23:26:57Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"101b31a45aab34f5dc66aed5e9e7cce1","kubernetes.io/config.mirror":"101b31a45aab34f5dc66aed5e9e7cce1","kubernetes.io/config.seen":"2023-11-07T23:26:57.103265171Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4928 chars]
	I1107 23:37:38.749499   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:38.749513   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:38.749520   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:38.749526   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:38.751658   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:38.751682   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:38.751692   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:38.751701   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:38.751716   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:38.751726   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:38 GMT
	I1107 23:37:38.751732   33391 round_trippers.go:580]     Audit-Id: 4c816708-03ab-4d25-83fe-21aa78367aba
	I1107 23:37:38.751740   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:38.751904   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"837","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1107 23:37:38.752305   33391 pod_ready.go:102] pod "kube-scheduler-multinode-553062" in "kube-system" namespace has status "Ready":"False"
	I1107 23:37:39.245601   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-553062
	I1107 23:37:39.245631   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:39.245644   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:39.245653   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:39.248092   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:39.248123   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:39.248133   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:39.248144   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:39.248152   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:39.248163   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:39 GMT
	I1107 23:37:39.248175   33391 round_trippers.go:580]     Audit-Id: 9f2addfc-2536-4840-8807-434133832d97
	I1107 23:37:39.248184   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:39.248451   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-553062","namespace":"kube-system","uid":"334a75af-c6cb-45ac-a020-8afc3f4a4e7a","resourceVersion":"870","creationTimestamp":"2023-11-07T23:26:57Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"101b31a45aab34f5dc66aed5e9e7cce1","kubernetes.io/config.mirror":"101b31a45aab34f5dc66aed5e9e7cce1","kubernetes.io/config.seen":"2023-11-07T23:26:57.103265171Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1107 23:37:39.248846   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:37:39.248862   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:39.248872   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:39.248881   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:39.251237   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:39.251258   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:39.251267   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:39.251276   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:39.251284   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:39.251296   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:39.251307   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:39 GMT
	I1107 23:37:39.251315   33391 round_trippers.go:580]     Audit-Id: 0b1a1add-bef3-4978-938a-c96c7fa6ea53
	I1107 23:37:39.251670   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"837","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1107 23:37:39.252059   33391 pod_ready.go:92] pod "kube-scheduler-multinode-553062" in "kube-system" namespace has status "Ready":"True"
	I1107 23:37:39.252079   33391 pod_ready.go:81] duration metric: took 2.80723853s waiting for pod "kube-scheduler-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:37:39.252091   33391 pod_ready.go:38] duration metric: took 8.400093941s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
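	The half-second cadence of the GETs above is minikube's readiness poll against the kube-scheduler pod. A minimal sketch of the same loop with client-go, assuming a standard kubeconfig in the default location (the pod name comes from the log; the 4-minute timeout is illustrative):

	package main

	import (
	    "context"
	    "fmt"
	    "time"

	    corev1 "k8s.io/api/core/v1"
	    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    "k8s.io/apimachinery/pkg/util/wait"
	    "k8s.io/client-go/kubernetes"
	    "k8s.io/client-go/tools/clientcmd"
	)

	func main() {
	    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	    if err != nil {
	        panic(err)
	    }
	    cs, err := kubernetes.NewForConfig(cfg)
	    if err != nil {
	        panic(err)
	    }
	    // Poll every 500ms, matching the timestamps above, until the PodReady
	    // condition flips to True (resourceVersion 760 -> 870 in the log).
	    err = wait.PollImmediate(500*time.Millisecond, 4*time.Minute, func() (bool, error) {
	        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
	            "kube-scheduler-multinode-553062", metav1.GetOptions{})
	        if err != nil {
	            return false, nil // tolerate transient errors and keep polling
	        }
	        for _, c := range pod.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue, nil
	            }
	        }
	        return false, nil
	    })
	    fmt.Println("ready:", err == nil)
	}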
	I1107 23:37:39.252110   33391 api_server.go:52] waiting for apiserver process to appear ...
	I1107 23:37:39.252165   33391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:37:39.269563   33391 command_runner.go:130] > 1127
	I1107 23:37:39.269587   33391 api_server.go:72] duration metric: took 9.798913225s to wait for apiserver process to appear ...
	I1107 23:37:39.269595   33391 api_server.go:88] waiting for apiserver healthz status ...
	I1107 23:37:39.269608   33391 api_server.go:253] Checking apiserver healthz at https://192.168.39.246:8443/healthz ...
	I1107 23:37:39.275071   33391 api_server.go:279] https://192.168.39.246:8443/healthz returned 200:
	ok
	I1107 23:37:39.275127   33391 round_trippers.go:463] GET https://192.168.39.246:8443/version
	I1107 23:37:39.275134   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:39.275142   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:39.275150   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:39.276527   33391 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 23:37:39.276563   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:39.276571   33391 round_trippers.go:580]     Content-Length: 264
	I1107 23:37:39.276577   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:39 GMT
	I1107 23:37:39.276584   33391 round_trippers.go:580]     Audit-Id: a111617d-d01a-4ef1-8553-da6b706b4e54
	I1107 23:37:39.276593   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:39.276598   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:39.276606   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:39.276614   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:39.276642   33391 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1107 23:37:39.276687   33391 api_server.go:141] control plane version: v1.28.3
	I1107 23:37:39.276698   33391 api_server.go:131] duration metric: took 7.09923ms to wait for apiserver health ...
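	Both checks are plain HTTPS GETs: /healthz must return 200 with body "ok", and /version returns the JSON printed above. A minimal sketch, assuming anonymous access to the two endpoints (granted by the default system:public-info-viewer binding) and that skipping TLS verification is acceptable for a throwaway local VM:

	package main

	import (
	    "crypto/tls"
	    "encoding/json"
	    "fmt"
	    "io"
	    "net/http"
	)

	func main() {
	    client := &http.Client{Transport: &http.Transport{
	        TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // local test VM only
	    }}

	    resp, err := client.Get("https://192.168.39.246:8443/healthz")
	    if err != nil {
	        panic(err)
	    }
	    body, _ := io.ReadAll(resp.Body)
	    resp.Body.Close()
	    fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: healthz: 200 ok

	    resp, err = client.Get("https://192.168.39.246:8443/version")
	    if err != nil {
	        panic(err)
	    }
	    defer resp.Body.Close()
	    var v struct {
	        GitVersion string `json:"gitVersion"`
	        Platform   string `json:"platform"`
	    }
	    if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
	        panic(err)
	    }
	    fmt.Printf("control plane: %s (%s)\n", v.GitVersion, v.Platform) // v1.28.3 (linux/amd64)
	}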
	I1107 23:37:39.276707   33391 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 23:37:39.276757   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1107 23:37:39.276763   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:39.276769   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:39.276778   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:39.280210   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:37:39.280232   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:39.280242   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:39 GMT
	I1107 23:37:39.280251   33391 round_trippers.go:580]     Audit-Id: c4b10cc6-b516-4341-9899-d179d8ff7bd2
	I1107 23:37:39.280259   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:39.280267   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:39.280275   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:39.280283   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:39.282563   33391 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"870"},"items":[{"metadata":{"name":"coredns-5dd5756b68-6ggfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"785c6064-d793-4959-8e34-28b4fc2549fc","resourceVersion":"848","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b131694e-1b3b-40e6-bc1b-3f62a604903c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b131694e-1b3b-40e6-bc1b-3f62a604903c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81883 chars]
	I1107 23:37:39.284987   33391 system_pods.go:59] 12 kube-system pods found
	I1107 23:37:39.285010   33391 system_pods.go:61] "coredns-5dd5756b68-6ggfr" [785c6064-d793-4959-8e34-28b4fc2549fc] Running
	I1107 23:37:39.285015   33391 system_pods.go:61] "etcd-multinode-553062" [3819c5f8-686f-4ce6-95fb-e9d5bb68cbc1] Running
	I1107 23:37:39.285021   33391 system_pods.go:61] "kindnet-4v85d" [4e2275f3-7b2e-4a79-9d52-645f8f85f574] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1107 23:37:39.285027   33391 system_pods.go:61] "kindnet-9stvx" [a9981d59-dbff-456f-9024-2754c2a9d0c6] Running
	I1107 23:37:39.285033   33391 system_pods.go:61] "kindnet-g8624" [61ab7168-2e63-4b3f-ab3d-b407952d7b06] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1107 23:37:39.285038   33391 system_pods.go:61] "kube-apiserver-multinode-553062" [30896fa0-3d8f-4861-bdf5-ad94796ad097] Running
	I1107 23:37:39.285043   33391 system_pods.go:61] "kube-controller-manager-multinode-553062" [5a895945-b908-44ba-a1c8-93245f6a93f5] Running
	I1107 23:37:39.285047   33391 system_pods.go:61] "kube-proxy-944rz" [db20b1cf-b422-4649-a6e1-4549c4c56f33] Running
	I1107 23:37:39.285051   33391 system_pods.go:61] "kube-proxy-rktlk" [92ea69ee-cd72-4594-a338-9837cc44e5a1] Running
	I1107 23:37:39.285054   33391 system_pods.go:61] "kube-proxy-xwp5j" [0347e6b5-3070-4b6a-ae2b-d1ac56a385cd] Running
	I1107 23:37:39.285060   33391 system_pods.go:61] "kube-scheduler-multinode-553062" [334a75af-c6cb-45ac-a020-8afc3f4a4e7a] Running
	I1107 23:37:39.285063   33391 system_pods.go:61] "storage-provisioner" [85179396-d02a-404a-a93e-e10db8c673b6] Running
	I1107 23:37:39.285068   33391 system_pods.go:74] duration metric: took 8.355023ms to wait for pod list to return data ...
	I1107 23:37:39.285076   33391 default_sa.go:34] waiting for default service account to be created ...
	I1107 23:37:39.285121   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I1107 23:37:39.285130   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:39.285136   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:39.285142   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:39.287070   33391 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 23:37:39.287085   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:39.287095   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:39.287103   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:39.287111   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:39.287124   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:39.287133   33391 round_trippers.go:580]     Content-Length: 261
	I1107 23:37:39.287146   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:39 GMT
	I1107 23:37:39.287159   33391 round_trippers.go:580]     Audit-Id: 5ac5867c-265a-4e1d-bd2f-2fc30c5a19ef
	I1107 23:37:39.287180   33391 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"870"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"6fff4cd8-da06-46a3-88c6-f639ebaea0a1","resourceVersion":"312","creationTimestamp":"2023-11-07T23:27:10Z"}}]}
	I1107 23:37:39.287331   33391 default_sa.go:45] found service account: "default"
	I1107 23:37:39.287347   33391 default_sa.go:55] duration metric: took 2.265787ms for default service account to be created ...
	I1107 23:37:39.287355   33391 system_pods.go:116] waiting for k8s-apps to be running ...
	I1107 23:37:39.440748   33391 request.go:629] Waited for 153.335977ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1107 23:37:39.440799   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1107 23:37:39.440804   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:39.440827   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:39.440836   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:39.445340   33391 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1107 23:37:39.445374   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:39.445384   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:39.445392   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:39 GMT
	I1107 23:37:39.445400   33391 round_trippers.go:580]     Audit-Id: 9018a13c-0b3a-43fe-b51f-dad7916a7d70
	I1107 23:37:39.445407   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:39.445416   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:39.445423   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:39.446844   33391 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"870"},"items":[{"metadata":{"name":"coredns-5dd5756b68-6ggfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"785c6064-d793-4959-8e34-28b4fc2549fc","resourceVersion":"848","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b131694e-1b3b-40e6-bc1b-3f62a604903c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b131694e-1b3b-40e6-bc1b-3f62a604903c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81883 chars]
	I1107 23:37:39.450376   33391 system_pods.go:86] 12 kube-system pods found
	I1107 23:37:39.450404   33391 system_pods.go:89] "coredns-5dd5756b68-6ggfr" [785c6064-d793-4959-8e34-28b4fc2549fc] Running
	I1107 23:37:39.450412   33391 system_pods.go:89] "etcd-multinode-553062" [3819c5f8-686f-4ce6-95fb-e9d5bb68cbc1] Running
	I1107 23:37:39.450436   33391 system_pods.go:89] "kindnet-4v85d" [4e2275f3-7b2e-4a79-9d52-645f8f85f574] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1107 23:37:39.450451   33391 system_pods.go:89] "kindnet-9stvx" [a9981d59-dbff-456f-9024-2754c2a9d0c6] Running
	I1107 23:37:39.450462   33391 system_pods.go:89] "kindnet-g8624" [61ab7168-2e63-4b3f-ab3d-b407952d7b06] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1107 23:37:39.450474   33391 system_pods.go:89] "kube-apiserver-multinode-553062" [30896fa0-3d8f-4861-bdf5-ad94796ad097] Running
	I1107 23:37:39.450485   33391 system_pods.go:89] "kube-controller-manager-multinode-553062" [5a895945-b908-44ba-a1c8-93245f6a93f5] Running
	I1107 23:37:39.450495   33391 system_pods.go:89] "kube-proxy-944rz" [db20b1cf-b422-4649-a6e1-4549c4c56f33] Running
	I1107 23:37:39.450503   33391 system_pods.go:89] "kube-proxy-rktlk" [92ea69ee-cd72-4594-a338-9837cc44e5a1] Running
	I1107 23:37:39.450511   33391 system_pods.go:89] "kube-proxy-xwp5j" [0347e6b5-3070-4b6a-ae2b-d1ac56a385cd] Running
	I1107 23:37:39.450521   33391 system_pods.go:89] "kube-scheduler-multinode-553062" [334a75af-c6cb-45ac-a020-8afc3f4a4e7a] Running
	I1107 23:37:39.450530   33391 system_pods.go:89] "storage-provisioner" [85179396-d02a-404a-a93e-e10db8c673b6] Running
	I1107 23:37:39.450542   33391 system_pods.go:126] duration metric: took 163.1813ms to wait for k8s-apps to be running ...
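	The two "Waited for ...ms due to client-side throttling, not priority and fairness" lines in this phase come from client-go's token-bucket limiter on the rest.Config, not from server-side APF. A sketch of the mechanism, assuming the library defaults of QPS 5 and burst 10 (this build's actual limits are not shown in the log):

	package main

	import (
	    "fmt"
	    "time"

	    "k8s.io/client-go/util/flowcontrol"
	)

	func main() {
	    limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10) // qps, burst
	    start := time.Now()
	    for i := 0; i < 12; i++ {
	        limiter.Accept() // blocks once the 10-token burst is exhausted
	    }
	    // Requests 11 and 12 each wait ~200ms for the bucket to refill at 5/s,
	    // the same order of magnitude as the 153ms and 175ms waits logged here.
	    fmt.Println("12 requests took", time.Since(start))
	}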
	I1107 23:37:39.450554   33391 system_svc.go:44] waiting for kubelet service to be running ....
	I1107 23:37:39.450607   33391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:37:39.465143   33391 system_svc.go:56] duration metric: took 14.583225ms WaitForService to wait for kubelet.
	I1107 23:37:39.465169   33391 kubeadm.go:581] duration metric: took 9.994493945s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1107 23:37:39.465190   33391 node_conditions.go:102] verifying NodePressure condition ...
	I1107 23:37:39.640514   33391 request.go:629] Waited for 175.253159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes
	I1107 23:37:39.640561   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes
	I1107 23:37:39.640565   33391 round_trippers.go:469] Request Headers:
	I1107 23:37:39.640572   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:37:39.640579   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:37:39.643479   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:37:39.643502   33391 round_trippers.go:577] Response Headers:
	I1107 23:37:39.643512   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:37:39.643521   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:37:39 GMT
	I1107 23:37:39.643530   33391 round_trippers.go:580]     Audit-Id: c3709fa5-0d78-4f50-8d93-42126b8157ab
	I1107 23:37:39.643535   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:37:39.643541   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:37:39.643546   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:37:39.644117   33391 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"870"},"items":[{"metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"837","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15076 chars]
	I1107 23:37:39.644902   33391 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1107 23:37:39.644925   33391 node_conditions.go:123] node cpu capacity is 2
	I1107 23:37:39.644940   33391 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1107 23:37:39.644946   33391 node_conditions.go:123] node cpu capacity is 2
	I1107 23:37:39.644953   33391 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1107 23:37:39.644959   33391 node_conditions.go:123] node cpu capacity is 2
	I1107 23:37:39.644973   33391 node_conditions.go:105] duration metric: took 179.778875ms to run NodePressure ...
	I1107 23:37:39.644985   33391 start.go:228] waiting for startup goroutines ...
	I1107 23:37:39.644995   33391 start.go:233] waiting for cluster config update ...
	I1107 23:37:39.645012   33391 start.go:242] writing updated cluster config ...
	I1107 23:37:39.645537   33391 config.go:182] Loaded profile config "multinode-553062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:37:39.645651   33391 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/config.json ...
	I1107 23:37:39.648464   33391 out.go:177] * Starting worker node multinode-553062-m02 in cluster multinode-553062
	I1107 23:37:39.650126   33391 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:37:39.650145   33391 cache.go:56] Caching tarball of preloaded images
	I1107 23:37:39.650204   33391 preload.go:174] Found /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1107 23:37:39.650214   33391 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1107 23:37:39.650285   33391 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/config.json ...
	I1107 23:37:39.650439   33391 start.go:365] acquiring machines lock for multinode-553062-m02: {Name:mkf032f30be570950285b6e092e75fb29cc3d166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1107 23:37:39.650474   33391 start.go:369] acquired machines lock for "multinode-553062-m02" in 18.712µs
	I1107 23:37:39.650486   33391 start.go:96] Skipping create...Using existing machine configuration
	I1107 23:37:39.650494   33391 fix.go:54] fixHost starting: m02
	I1107 23:37:39.650724   33391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:37:39.650743   33391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:37:39.664709   33391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43085
	I1107 23:37:39.665151   33391 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:37:39.665593   33391 main.go:141] libmachine: Using API Version  1
	I1107 23:37:39.665617   33391 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:37:39.665961   33391 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:37:39.666156   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .DriverName
	I1107 23:37:39.666320   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetState
	I1107 23:37:39.668194   33391 fix.go:102] recreateIfNeeded on multinode-553062-m02: state=Running err=<nil>
	W1107 23:37:39.668210   33391 fix.go:128] unexpected machine state, will restart: <nil>
	I1107 23:37:39.670287   33391 out.go:177] * Updating the running kvm2 "multinode-553062-m02" VM ...
	I1107 23:37:39.671935   33391 machine.go:88] provisioning docker machine ...
	I1107 23:37:39.671953   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .DriverName
	I1107 23:37:39.672160   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetMachineName
	I1107 23:37:39.672295   33391 buildroot.go:166] provisioning hostname "multinode-553062-m02"
	I1107 23:37:39.672326   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetMachineName
	I1107 23:37:39.672468   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHHostname
	I1107 23:37:39.674633   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:37:39.675114   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-553062-m02 Clientid:01:52:54:00:49:ff:75}
	I1107 23:37:39.675155   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:37:39.675312   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHPort
	I1107 23:37:39.675496   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHKeyPath
	I1107 23:37:39.675656   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHKeyPath
	I1107 23:37:39.675850   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHUsername
	I1107 23:37:39.676028   33391 main.go:141] libmachine: Using SSH client type: native
	I1107 23:37:39.676334   33391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I1107 23:37:39.676348   33391 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-553062-m02 && echo "multinode-553062-m02" | sudo tee /etc/hostname
	I1107 23:37:39.808674   33391 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-553062-m02
	
	I1107 23:37:39.808703   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHHostname
	I1107 23:37:39.811514   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:37:39.811900   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-553062-m02 Clientid:01:52:54:00:49:ff:75}
	I1107 23:37:39.811935   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:37:39.812074   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHPort
	I1107 23:37:39.812261   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHKeyPath
	I1107 23:37:39.812438   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHKeyPath
	I1107 23:37:39.812543   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHUsername
	I1107 23:37:39.812718   33391 main.go:141] libmachine: Using SSH client type: native
	I1107 23:37:39.813097   33391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I1107 23:37:39.813124   33391 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-553062-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-553062-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-553062-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 23:37:39.929584   33391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1107 23:37:39.929616   33391 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1107 23:37:39.929630   33391 buildroot.go:174] setting up certificates
	I1107 23:37:39.929638   33391 provision.go:83] configureAuth start
	I1107 23:37:39.929646   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetMachineName
	I1107 23:37:39.929905   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetIP
	I1107 23:37:39.932903   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:37:39.933262   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-553062-m02 Clientid:01:52:54:00:49:ff:75}
	I1107 23:37:39.933291   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:37:39.933460   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHHostname
	I1107 23:37:39.935763   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:37:39.936184   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-553062-m02 Clientid:01:52:54:00:49:ff:75}
	I1107 23:37:39.936207   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:37:39.936321   33391 provision.go:138] copyHostCerts
	I1107 23:37:39.936353   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1107 23:37:39.936388   33391 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1107 23:37:39.936401   33391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1107 23:37:39.936496   33391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1107 23:37:39.936591   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1107 23:37:39.936616   33391 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1107 23:37:39.936626   33391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1107 23:37:39.936665   33391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1107 23:37:39.936730   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1107 23:37:39.936756   33391 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1107 23:37:39.936766   33391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1107 23:37:39.936800   33391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1107 23:37:39.936883   33391 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.multinode-553062-m02 san=[192.168.39.137 192.168.39.137 localhost 127.0.0.1 minikube multinode-553062-m02]
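	The server cert carries the SAN set listed on the line above (the node IP, listed twice, plus localhost, 127.0.0.1, minikube, and the machine name), signed by the profile's CA. A self-contained sketch with Go's crypto/x509, generating a throwaway CA in-process instead of loading ca.pem/ca-key.pem (the names are copied from the log; everything else is illustrative):

	package main

	import (
	    "crypto/rand"
	    "crypto/rsa"
	    "crypto/x509"
	    "crypto/x509/pkix"
	    "encoding/pem"
	    "math/big"
	    "net"
	    "os"
	    "time"
	)

	func main() {
	    // Throwaway CA standing in for minikube's ca.pem / ca-key.pem.
	    caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	    caTmpl := &x509.Certificate{
	        SerialNumber:          big.NewInt(1),
	        Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
	        NotBefore:             time.Now(),
	        NotAfter:              time.Now().AddDate(10, 0, 0),
	        IsCA:                  true,
	        KeyUsage:              x509.KeyUsageCertSign,
	        BasicConstraintsValid: true,
	    }
	    caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	    caCert, _ := x509.ParseCertificate(caDER)

	    // Server cert with the SANs from the provision.go line above.
	    srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	    srvTmpl := &x509.Certificate{
	        SerialNumber: big.NewInt(2),
	        Subject:      pkix.Name{Organization: []string{"jenkins.multinode-553062-m02"}},
	        NotBefore:    time.Now(),
	        NotAfter:     time.Now().AddDate(1, 0, 0),
	        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	        DNSNames:     []string{"localhost", "minikube", "multinode-553062-m02"},
	        IPAddresses:  []net.IP{net.ParseIP("192.168.39.137"), net.ParseIP("127.0.0.1")},
	    }
	    srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}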
	I1107 23:37:40.056298   33391 provision.go:172] copyRemoteCerts
	I1107 23:37:40.056350   33391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 23:37:40.056371   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHHostname
	I1107 23:37:40.058870   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:37:40.059261   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-553062-m02 Clientid:01:52:54:00:49:ff:75}
	I1107 23:37:40.059294   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:37:40.059447   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHPort
	I1107 23:37:40.059628   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHKeyPath
	I1107 23:37:40.059767   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHUsername
	I1107 23:37:40.059898   33391 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062-m02/id_rsa Username:docker}
	I1107 23:37:40.146672   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1107 23:37:40.146733   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1107 23:37:40.170311   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1107 23:37:40.170369   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1107 23:37:40.192525   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1107 23:37:40.192589   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1107 23:37:40.215895   33391 provision.go:86] duration metric: configureAuth took 286.245541ms
	I1107 23:37:40.215922   33391 buildroot.go:189] setting minikube options for container-runtime
	I1107 23:37:40.216129   33391 config.go:182] Loaded profile config "multinode-553062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:37:40.216208   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHHostname
	I1107 23:37:40.219117   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:37:40.219469   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-553062-m02 Clientid:01:52:54:00:49:ff:75}
	I1107 23:37:40.219523   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:37:40.219650   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHPort
	I1107 23:37:40.219841   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHKeyPath
	I1107 23:37:40.220028   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHKeyPath
	I1107 23:37:40.220158   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHUsername
	I1107 23:37:40.220339   33391 main.go:141] libmachine: Using SSH client type: native
	I1107 23:37:40.220640   33391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I1107 23:37:40.220655   33391 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1107 23:39:10.833044   33391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1107 23:39:10.833069   33391 machine.go:91] provisioned docker machine in 1m31.161120422s
	I1107 23:39:10.833080   33391 start.go:300] post-start starting for "multinode-553062-m02" (driver="kvm2")
	I1107 23:39:10.833090   33391 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 23:39:10.833105   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .DriverName
	I1107 23:39:10.833405   33391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 23:39:10.833453   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHHostname
	I1107 23:39:10.836642   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:39:10.837070   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-553062-m02 Clientid:01:52:54:00:49:ff:75}
	I1107 23:39:10.837131   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:39:10.837318   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHPort
	I1107 23:39:10.837489   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHKeyPath
	I1107 23:39:10.837656   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHUsername
	I1107 23:39:10.837828   33391 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062-m02/id_rsa Username:docker}
	I1107 23:39:10.926911   33391 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 23:39:10.931370   33391 command_runner.go:130] > NAME=Buildroot
	I1107 23:39:10.931395   33391 command_runner.go:130] > VERSION=2021.02.12-1-gb75713b-dirty
	I1107 23:39:10.931403   33391 command_runner.go:130] > ID=buildroot
	I1107 23:39:10.931410   33391 command_runner.go:130] > VERSION_ID=2021.02.12
	I1107 23:39:10.931417   33391 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1107 23:39:10.931720   33391 info.go:137] Remote host: Buildroot 2021.02.12
	I1107 23:39:10.931740   33391 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1107 23:39:10.931812   33391 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1107 23:39:10.931912   33391 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1107 23:39:10.931926   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> /etc/ssl/certs/168482.pem
	I1107 23:39:10.932043   33391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 23:39:10.940906   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
	I1107 23:39:10.964077   33391 start.go:303] post-start completed in 130.985195ms
	I1107 23:39:10.964103   33391 fix.go:56] fixHost completed within 1m31.313603974s
	I1107 23:39:10.964126   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHHostname
	I1107 23:39:10.966725   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:39:10.967047   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-553062-m02 Clientid:01:52:54:00:49:ff:75}
	I1107 23:39:10.967090   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:39:10.967221   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHPort
	I1107 23:39:10.967428   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHKeyPath
	I1107 23:39:10.967599   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHKeyPath
	I1107 23:39:10.967710   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHUsername
	I1107 23:39:10.967862   33391 main.go:141] libmachine: Using SSH client type: native
	I1107 23:39:10.968188   33391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I1107 23:39:10.968203   33391 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1107 23:39:11.081479   33391 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699400351.075293451
	
	I1107 23:39:11.081502   33391 fix.go:206] guest clock: 1699400351.075293451
	I1107 23:39:11.081511   33391 fix.go:219] Guest: 2023-11-07 23:39:11.075293451 +0000 UTC Remote: 2023-11-07 23:39:10.964108317 +0000 UTC m=+448.075590982 (delta=111.185134ms)
	I1107 23:39:11.081531   33391 fix.go:190] guest clock delta is within tolerance: 111.185134ms
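	The delta is the guest-reported Unix timestamp minus the host-side wall clock captured for the same SSH round trip; the figures on the three lines above reproduce it exactly (the one-second bound below is an illustrative threshold, not minikube's documented tolerance):

	package main

	import (
	    "fmt"
	    "time"
	)

	func main() {
	    guest := time.Unix(1699400351, 75293451)                          // 2023-11-07 23:39:11.075293451 UTC
	    remote := time.Date(2023, 11, 7, 23, 39, 10, 964108317, time.UTC) // host-side timestamp from the log
	    delta := guest.Sub(remote)
	    fmt.Println(delta)               // 111.185134ms, matching fix.go:219
	    fmt.Println(delta < time.Second) // true under the assumed tolerance
	}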
	I1107 23:39:11.081537   33391 start.go:83] releasing machines lock for "multinode-553062-m02", held for 1m31.431054484s
	I1107 23:39:11.081558   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .DriverName
	I1107 23:39:11.081807   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetIP
	I1107 23:39:11.084390   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:39:11.084852   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-553062-m02 Clientid:01:52:54:00:49:ff:75}
	I1107 23:39:11.084881   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:39:11.087219   33391 out.go:177] * Found network options:
	I1107 23:39:11.088749   33391 out.go:177]   - NO_PROXY=192.168.39.246
	W1107 23:39:11.090095   33391 proxy.go:119] fail to check proxy env: Error ip not in block
	I1107 23:39:11.090120   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .DriverName
	I1107 23:39:11.090628   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .DriverName
	I1107 23:39:11.090812   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .DriverName
	I1107 23:39:11.090891   33391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 23:39:11.090930   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHHostname
	W1107 23:39:11.090994   33391 proxy.go:119] fail to check proxy env: Error ip not in block
	I1107 23:39:11.091080   33391 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1107 23:39:11.091106   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHHostname
	I1107 23:39:11.093292   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:39:11.093666   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-553062-m02 Clientid:01:52:54:00:49:ff:75}
	I1107 23:39:11.093694   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:39:11.093717   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:39:11.093829   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHPort
	I1107 23:39:11.094027   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHKeyPath
	I1107 23:39:11.094093   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-553062-m02 Clientid:01:52:54:00:49:ff:75}
	I1107 23:39:11.094118   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:39:11.094202   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHUsername
	I1107 23:39:11.094356   33391 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062-m02/id_rsa Username:docker}
	I1107 23:39:11.094425   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHPort
	I1107 23:39:11.094561   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHKeyPath
	I1107 23:39:11.094719   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHUsername
	I1107 23:39:11.094847   33391 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062-m02/id_rsa Username:docker}
	I1107 23:39:11.345502   33391 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1107 23:39:11.345585   33391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1107 23:39:11.351331   33391 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1107 23:39:11.351627   33391 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1107 23:39:11.351695   33391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:39:11.360413   33391 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1107 23:39:11.360436   33391 start.go:472] detecting cgroup driver to use...
	I1107 23:39:11.360500   33391 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1107 23:39:11.374233   33391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1107 23:39:11.386589   33391 docker.go:203] disabling cri-docker service (if available) ...
	I1107 23:39:11.386654   33391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1107 23:39:11.399524   33391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1107 23:39:11.412166   33391 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1107 23:39:11.541739   33391 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1107 23:39:11.671495   33391 docker.go:219] disabling docker service ...
	I1107 23:39:11.671567   33391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1107 23:39:11.688167   33391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1107 23:39:11.702632   33391 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1107 23:39:11.849041   33391 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1107 23:39:11.974036   33391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
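
The sequence above stops, disables, and masks the cri-docker and docker units so that CRI-O is the only runtime left active. The same pattern in a short Go sketch (assumes systemd and passwordless sudo in the guest; illustrative only, not minikube's implementation):

	// A sketch of the stop/disable/mask sequence logged above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("%v: %v (%s)\n", args, err, out)
		}
	}

	func main() {
		for _, unit := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
			run("systemctl", "stop", "-f", unit)
		}
		run("systemctl", "disable", "cri-docker.socket")
		run("systemctl", "mask", "cri-docker.service")
		run("systemctl", "disable", "docker.socket")
		run("systemctl", "mask", "docker.service")
	}
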
	I1107 23:39:11.986922   33391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 23:39:12.004105   33391 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1107 23:39:12.004524   33391 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1107 23:39:12.004586   33391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:39:12.014266   33391 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1107 23:39:12.014328   33391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:39:12.023645   33391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:39:12.032946   33391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:39:12.042303   33391 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1107 23:39:12.052242   33391 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1107 23:39:12.060448   33391 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1107 23:39:12.060498   33391 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
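
Taken together, the commands above point crictl at the CRI-O socket, pin the pause image to registry.k8s.io/pause:3.9, switch the cgroup manager to cgroupfs (with conmon in the "pod" cgroup), and enable bridge netfilter plus IPv4 forwarding. The two sed rewrites amount to the following Go sketch (illustrative; assumes the drop-in file exists and already carries both keys; the conmon_cgroup delete/append is handled the same way in the real run):

	// A sketch of the sed rewrites over /etc/crio/crio.conf.d/02-crio.conf.
	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(conf, data, 0o644); err != nil {
			panic(err)
		}
	}
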
	I1107 23:39:12.068602   33391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 23:39:12.182543   33391 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1107 23:39:12.405644   33391 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1107 23:39:12.405712   33391 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1107 23:39:12.410724   33391 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1107 23:39:12.410751   33391 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1107 23:39:12.410762   33391 command_runner.go:130] > Device: 16h/22d	Inode: 1198        Links: 1
	I1107 23:39:12.410773   33391 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1107 23:39:12.410781   33391 command_runner.go:130] > Access: 2023-11-07 23:39:12.335035474 +0000
	I1107 23:39:12.410803   33391 command_runner.go:130] > Modify: 2023-11-07 23:39:12.335035474 +0000
	I1107 23:39:12.410821   33391 command_runner.go:130] > Change: 2023-11-07 23:39:12.335035474 +0000
	I1107 23:39:12.410830   33391 command_runner.go:130] >  Birth: -
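
After restarting crio, minikube polls for the runtime socket before proceeding. A minimal wait loop in Go (illustrative, not minikube's actual code; the path and 60s budget come from the log):

	// A sketch of the "Will wait 60s for socket path" step: poll until the
	// CRI-O socket exists and is a UNIX socket.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
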
	I1107 23:39:12.410853   33391 start.go:540] Will wait 60s for crictl version
	I1107 23:39:12.410901   33391 ssh_runner.go:195] Run: which crictl
	I1107 23:39:12.414611   33391 command_runner.go:130] > /usr/bin/crictl
	I1107 23:39:12.414888   33391 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1107 23:39:12.452533   33391 command_runner.go:130] > Version:  0.1.0
	I1107 23:39:12.452559   33391 command_runner.go:130] > RuntimeName:  cri-o
	I1107 23:39:12.452567   33391 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1107 23:39:12.452576   33391 command_runner.go:130] > RuntimeApiVersion:  v1
	I1107 23:39:12.454362   33391 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1107 23:39:12.454432   33391 ssh_runner.go:195] Run: crio --version
	I1107 23:39:12.500732   33391 command_runner.go:130] > crio version 1.24.1
	I1107 23:39:12.500755   33391 command_runner.go:130] > Version:          1.24.1
	I1107 23:39:12.500762   33391 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1107 23:39:12.500766   33391 command_runner.go:130] > GitTreeState:     dirty
	I1107 23:39:12.500772   33391 command_runner.go:130] > BuildDate:        2023-11-07T07:32:32Z
	I1107 23:39:12.500776   33391 command_runner.go:130] > GoVersion:        go1.19.9
	I1107 23:39:12.500781   33391 command_runner.go:130] > Compiler:         gc
	I1107 23:39:12.500785   33391 command_runner.go:130] > Platform:         linux/amd64
	I1107 23:39:12.500791   33391 command_runner.go:130] > Linkmode:         dynamic
	I1107 23:39:12.500798   33391 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1107 23:39:12.500803   33391 command_runner.go:130] > SeccompEnabled:   true
	I1107 23:39:12.500807   33391 command_runner.go:130] > AppArmorEnabled:  false
	I1107 23:39:12.502111   33391 ssh_runner.go:195] Run: crio --version
	I1107 23:39:12.557667   33391 command_runner.go:130] > crio version 1.24.1
	I1107 23:39:12.557697   33391 command_runner.go:130] > Version:          1.24.1
	I1107 23:39:12.557707   33391 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1107 23:39:12.557715   33391 command_runner.go:130] > GitTreeState:     dirty
	I1107 23:39:12.557723   33391 command_runner.go:130] > BuildDate:        2023-11-07T07:32:32Z
	I1107 23:39:12.557730   33391 command_runner.go:130] > GoVersion:        go1.19.9
	I1107 23:39:12.557737   33391 command_runner.go:130] > Compiler:         gc
	I1107 23:39:12.557745   33391 command_runner.go:130] > Platform:         linux/amd64
	I1107 23:39:12.557755   33391 command_runner.go:130] > Linkmode:         dynamic
	I1107 23:39:12.557767   33391 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1107 23:39:12.557777   33391 command_runner.go:130] > SeccompEnabled:   true
	I1107 23:39:12.557788   33391 command_runner.go:130] > AppArmorEnabled:  false
	I1107 23:39:12.559885   33391 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1107 23:39:12.561468   33391 out.go:177]   - env NO_PROXY=192.168.39.246
	I1107 23:39:12.562872   33391 main.go:141] libmachine: (multinode-553062-m02) Calling .GetIP
	I1107 23:39:12.565687   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:39:12.566085   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-553062-m02 Clientid:01:52:54:00:49:ff:75}
	I1107 23:39:12.566121   33391 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:39:12.566364   33391 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1107 23:39:12.580501   33391 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1107 23:39:12.580747   33391 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062 for IP: 192.168.39.137
	I1107 23:39:12.580776   33391 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:39:12.580948   33391 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1107 23:39:12.580998   33391 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1107 23:39:12.581016   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1107 23:39:12.581039   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1107 23:39:12.581057   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1107 23:39:12.581076   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1107 23:39:12.581145   33391 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem (1338 bytes)
	W1107 23:39:12.581190   33391 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848_empty.pem, impossibly tiny 0 bytes
	I1107 23:39:12.581206   33391 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 23:39:12.581240   33391 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1107 23:39:12.581276   33391 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1107 23:39:12.581308   33391 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1107 23:39:12.581364   33391 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem (1708 bytes)
	I1107 23:39:12.581404   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> /usr/share/ca-certificates/168482.pem
	I1107 23:39:12.581424   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:39:12.581443   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem -> /usr/share/ca-certificates/16848.pem
	I1107 23:39:12.581783   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 23:39:12.648021   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1107 23:39:12.682099   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 23:39:12.706676   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1107 23:39:12.727908   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /usr/share/ca-certificates/168482.pem (1708 bytes)
	I1107 23:39:12.749706   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 23:39:12.776642   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem --> /usr/share/ca-certificates/16848.pem (1338 bytes)
	I1107 23:39:12.799818   33391 ssh_runner.go:195] Run: openssl version
	I1107 23:39:12.805808   33391 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1107 23:39:12.805872   33391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168482.pem && ln -fs /usr/share/ca-certificates/168482.pem /etc/ssl/certs/168482.pem"
	I1107 23:39:12.816061   33391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168482.pem
	I1107 23:39:12.820519   33391 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1107 23:39:12.820718   33391 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1107 23:39:12.820763   33391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168482.pem
	I1107 23:39:12.825896   33391 command_runner.go:130] > 3ec20f2e
	I1107 23:39:12.826124   33391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168482.pem /etc/ssl/certs/3ec20f2e.0"
	I1107 23:39:12.835278   33391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 23:39:12.846459   33391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:39:12.850625   33391 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:39:12.850862   33391 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:39:12.850917   33391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:39:12.856259   33391 command_runner.go:130] > b5213941
	I1107 23:39:12.856317   33391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 23:39:12.865095   33391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16848.pem && ln -fs /usr/share/ca-certificates/16848.pem /etc/ssl/certs/16848.pem"
	I1107 23:39:12.877398   33391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16848.pem
	I1107 23:39:12.882008   33391 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1107 23:39:12.882316   33391 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1107 23:39:12.882371   33391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16848.pem
	I1107 23:39:12.887633   33391 command_runner.go:130] > 51391683
	I1107 23:39:12.887698   33391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16848.pem /etc/ssl/certs/51391683.0"
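
Each CA certificate above is placed under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0) so TLS clients on the node can find it. A Go sketch of that hash-and-symlink step (illustrative; shells out to openssl just as the logged commands do):

	// A sketch of the hash-and-symlink step: compute the OpenSSL subject hash
	// of a CA certificate and link it into /etc/ssl/certs/<hash>.0.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := "/etc/ssl/certs/" + hash + ".0"
		// same effect as: test -L <link> || ln -fs <cert> <link>
		if _, err := os.Lstat(link); err != nil {
			if err := os.Symlink(cert, link); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}
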
	I1107 23:39:12.896535   33391 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1107 23:39:12.900298   33391 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1107 23:39:12.900374   33391 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1107 23:39:12.900456   33391 ssh_runner.go:195] Run: crio config
	I1107 23:39:12.954340   33391 command_runner.go:130] ! time="2023-11-07 23:39:12.948395091Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1107 23:39:12.954372   33391 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1107 23:39:12.960399   33391 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1107 23:39:12.960421   33391 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1107 23:39:12.960431   33391 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1107 23:39:12.960437   33391 command_runner.go:130] > #
	I1107 23:39:12.960447   33391 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1107 23:39:12.960457   33391 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1107 23:39:12.960470   33391 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1107 23:39:12.960486   33391 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1107 23:39:12.960496   33391 command_runner.go:130] > # reload'.
	I1107 23:39:12.960510   33391 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1107 23:39:12.960525   33391 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1107 23:39:12.960539   33391 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1107 23:39:12.960554   33391 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1107 23:39:12.960563   33391 command_runner.go:130] > [crio]
	I1107 23:39:12.960578   33391 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1107 23:39:12.960591   33391 command_runner.go:130] > # containers images, in this directory.
	I1107 23:39:12.960604   33391 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1107 23:39:12.960625   33391 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1107 23:39:12.960637   33391 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1107 23:39:12.960652   33391 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1107 23:39:12.960666   33391 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1107 23:39:12.960683   33391 command_runner.go:130] > storage_driver = "overlay"
	I1107 23:39:12.960696   33391 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1107 23:39:12.960710   33391 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1107 23:39:12.960721   33391 command_runner.go:130] > storage_option = [
	I1107 23:39:12.960731   33391 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1107 23:39:12.960739   33391 command_runner.go:130] > ]
	I1107 23:39:12.960751   33391 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1107 23:39:12.960765   33391 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1107 23:39:12.960776   33391 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1107 23:39:12.960790   33391 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1107 23:39:12.960804   33391 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1107 23:39:12.960831   33391 command_runner.go:130] > # always happen on a node reboot
	I1107 23:39:12.960844   33391 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1107 23:39:12.960854   33391 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1107 23:39:12.960868   33391 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1107 23:39:12.960885   33391 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1107 23:39:12.960897   33391 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1107 23:39:12.960914   33391 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1107 23:39:12.960935   33391 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1107 23:39:12.960945   33391 command_runner.go:130] > # internal_wipe = true
	I1107 23:39:12.960955   33391 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1107 23:39:12.960969   33391 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1107 23:39:12.960983   33391 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1107 23:39:12.960996   33391 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1107 23:39:12.961010   33391 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1107 23:39:12.961019   33391 command_runner.go:130] > [crio.api]
	I1107 23:39:12.961033   33391 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1107 23:39:12.961044   33391 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1107 23:39:12.961054   33391 command_runner.go:130] > # IP address on which the stream server will listen.
	I1107 23:39:12.961066   33391 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1107 23:39:12.961081   33391 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1107 23:39:12.961093   33391 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1107 23:39:12.961104   33391 command_runner.go:130] > # stream_port = "0"
	I1107 23:39:12.961117   33391 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1107 23:39:12.961132   33391 command_runner.go:130] > # stream_enable_tls = false
	I1107 23:39:12.961146   33391 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1107 23:39:12.961158   33391 command_runner.go:130] > # stream_idle_timeout = ""
	I1107 23:39:12.961173   33391 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1107 23:39:12.961188   33391 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1107 23:39:12.961198   33391 command_runner.go:130] > # minutes.
	I1107 23:39:12.961209   33391 command_runner.go:130] > # stream_tls_cert = ""
	I1107 23:39:12.961223   33391 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1107 23:39:12.961238   33391 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1107 23:39:12.961248   33391 command_runner.go:130] > # stream_tls_key = ""
	I1107 23:39:12.961262   33391 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1107 23:39:12.961277   33391 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1107 23:39:12.961290   33391 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1107 23:39:12.961301   33391 command_runner.go:130] > # stream_tls_ca = ""
	I1107 23:39:12.961315   33391 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1107 23:39:12.961325   33391 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1107 23:39:12.961339   33391 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1107 23:39:12.961351   33391 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1107 23:39:12.961399   33391 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1107 23:39:12.961414   33391 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1107 23:39:12.961421   33391 command_runner.go:130] > [crio.runtime]
	I1107 23:39:12.961431   33391 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1107 23:39:12.961445   33391 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1107 23:39:12.961455   33391 command_runner.go:130] > # "nofile=1024:2048"
	I1107 23:39:12.961470   33391 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1107 23:39:12.961480   33391 command_runner.go:130] > # default_ulimits = [
	I1107 23:39:12.961488   33391 command_runner.go:130] > # ]
	I1107 23:39:12.961499   33391 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1107 23:39:12.961510   33391 command_runner.go:130] > # no_pivot = false
	I1107 23:39:12.961523   33391 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1107 23:39:12.961534   33391 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1107 23:39:12.961546   33391 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1107 23:39:12.961560   33391 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1107 23:39:12.961572   33391 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1107 23:39:12.961588   33391 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1107 23:39:12.961600   33391 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1107 23:39:12.961612   33391 command_runner.go:130] > # Cgroup setting for conmon
	I1107 23:39:12.961627   33391 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1107 23:39:12.961636   33391 command_runner.go:130] > conmon_cgroup = "pod"
	I1107 23:39:12.961651   33391 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1107 23:39:12.961664   33391 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1107 23:39:12.961679   33391 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1107 23:39:12.961689   33391 command_runner.go:130] > conmon_env = [
	I1107 23:39:12.961702   33391 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1107 23:39:12.961711   33391 command_runner.go:130] > ]
	I1107 23:39:12.961721   33391 command_runner.go:130] > # Additional environment variables to set for all the
	I1107 23:39:12.961733   33391 command_runner.go:130] > # containers. These are overridden if set in the
	I1107 23:39:12.961744   33391 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1107 23:39:12.961754   33391 command_runner.go:130] > # default_env = [
	I1107 23:39:12.961761   33391 command_runner.go:130] > # ]
	I1107 23:39:12.961773   33391 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1107 23:39:12.961783   33391 command_runner.go:130] > # selinux = false
	I1107 23:39:12.961798   33391 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1107 23:39:12.961812   33391 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1107 23:39:12.961826   33391 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1107 23:39:12.961836   33391 command_runner.go:130] > # seccomp_profile = ""
	I1107 23:39:12.961851   33391 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1107 23:39:12.961865   33391 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1107 23:39:12.961880   33391 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1107 23:39:12.961891   33391 command_runner.go:130] > # which might increase security.
	I1107 23:39:12.961905   33391 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1107 23:39:12.961919   33391 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1107 23:39:12.961933   33391 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1107 23:39:12.961947   33391 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1107 23:39:12.961962   33391 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1107 23:39:12.961975   33391 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:39:12.961986   33391 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1107 23:39:12.962000   33391 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1107 23:39:12.962010   33391 command_runner.go:130] > # the cgroup blockio controller.
	I1107 23:39:12.962018   33391 command_runner.go:130] > # blockio_config_file = ""
	I1107 23:39:12.962033   33391 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1107 23:39:12.962043   33391 command_runner.go:130] > # irqbalance daemon.
	I1107 23:39:12.962054   33391 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1107 23:39:12.962069   33391 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1107 23:39:12.962083   33391 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:39:12.962094   33391 command_runner.go:130] > # rdt_config_file = ""
	I1107 23:39:12.962107   33391 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1107 23:39:12.962117   33391 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1107 23:39:12.962135   33391 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1107 23:39:12.962145   33391 command_runner.go:130] > # separate_pull_cgroup = ""
	I1107 23:39:12.962160   33391 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1107 23:39:12.962175   33391 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1107 23:39:12.962185   33391 command_runner.go:130] > # will be added.
	I1107 23:39:12.962196   33391 command_runner.go:130] > # default_capabilities = [
	I1107 23:39:12.962204   33391 command_runner.go:130] > # 	"CHOWN",
	I1107 23:39:12.962214   33391 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1107 23:39:12.962222   33391 command_runner.go:130] > # 	"FSETID",
	I1107 23:39:12.962232   33391 command_runner.go:130] > # 	"FOWNER",
	I1107 23:39:12.962241   33391 command_runner.go:130] > # 	"SETGID",
	I1107 23:39:12.962251   33391 command_runner.go:130] > # 	"SETUID",
	I1107 23:39:12.962261   33391 command_runner.go:130] > # 	"SETPCAP",
	I1107 23:39:12.962270   33391 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1107 23:39:12.962283   33391 command_runner.go:130] > # 	"KILL",
	I1107 23:39:12.962292   33391 command_runner.go:130] > # ]
	I1107 23:39:12.962304   33391 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1107 23:39:12.962318   33391 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1107 23:39:12.962330   33391 command_runner.go:130] > # default_sysctls = [
	I1107 23:39:12.962340   33391 command_runner.go:130] > # ]
	I1107 23:39:12.962352   33391 command_runner.go:130] > # List of devices on the host that a
	I1107 23:39:12.962367   33391 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1107 23:39:12.962377   33391 command_runner.go:130] > # allowed_devices = [
	I1107 23:39:12.962387   33391 command_runner.go:130] > # 	"/dev/fuse",
	I1107 23:39:12.962395   33391 command_runner.go:130] > # ]
	I1107 23:39:12.962408   33391 command_runner.go:130] > # List of additional devices, specified as
	I1107 23:39:12.962424   33391 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1107 23:39:12.962436   33391 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1107 23:39:12.962480   33391 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1107 23:39:12.962491   33391 command_runner.go:130] > # additional_devices = [
	I1107 23:39:12.962497   33391 command_runner.go:130] > # ]
	I1107 23:39:12.962506   33391 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1107 23:39:12.962517   33391 command_runner.go:130] > # cdi_spec_dirs = [
	I1107 23:39:12.962528   33391 command_runner.go:130] > # 	"/etc/cdi",
	I1107 23:39:12.962539   33391 command_runner.go:130] > # 	"/var/run/cdi",
	I1107 23:39:12.962546   33391 command_runner.go:130] > # ]
	I1107 23:39:12.962558   33391 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1107 23:39:12.962572   33391 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1107 23:39:12.962583   33391 command_runner.go:130] > # Defaults to false.
	I1107 23:39:12.962595   33391 command_runner.go:130] > # device_ownership_from_security_context = false
	I1107 23:39:12.962610   33391 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1107 23:39:12.962624   33391 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1107 23:39:12.962635   33391 command_runner.go:130] > # hooks_dir = [
	I1107 23:39:12.962647   33391 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1107 23:39:12.962657   33391 command_runner.go:130] > # ]
	I1107 23:39:12.962668   33391 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1107 23:39:12.962683   33391 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1107 23:39:12.962695   33391 command_runner.go:130] > # its default mounts from the following two files:
	I1107 23:39:12.962704   33391 command_runner.go:130] > #
	I1107 23:39:12.962715   33391 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1107 23:39:12.962729   33391 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1107 23:39:12.962742   33391 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1107 23:39:12.962751   33391 command_runner.go:130] > #
	I1107 23:39:12.962763   33391 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1107 23:39:12.962777   33391 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1107 23:39:12.962792   33391 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1107 23:39:12.962804   33391 command_runner.go:130] > #      only add mounts it finds in this file.
	I1107 23:39:12.962810   33391 command_runner.go:130] > #
	I1107 23:39:12.962822   33391 command_runner.go:130] > # default_mounts_file = ""
	I1107 23:39:12.962835   33391 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1107 23:39:12.962850   33391 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1107 23:39:12.962861   33391 command_runner.go:130] > pids_limit = 1024
	I1107 23:39:12.962875   33391 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1107 23:39:12.962889   33391 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1107 23:39:12.962904   33391 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1107 23:39:12.962921   33391 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1107 23:39:12.962932   33391 command_runner.go:130] > # log_size_max = -1
	I1107 23:39:12.962947   33391 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1107 23:39:12.962959   33391 command_runner.go:130] > # log_to_journald = false
	I1107 23:39:12.962973   33391 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1107 23:39:12.962985   33391 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1107 23:39:12.962997   33391 command_runner.go:130] > # Path to directory for container attach sockets.
	I1107 23:39:12.963007   33391 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1107 23:39:12.963020   33391 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1107 23:39:12.963032   33391 command_runner.go:130] > # bind_mount_prefix = ""
	I1107 23:39:12.963045   33391 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1107 23:39:12.963055   33391 command_runner.go:130] > # read_only = false
	I1107 23:39:12.963070   33391 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1107 23:39:12.963084   33391 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1107 23:39:12.963095   33391 command_runner.go:130] > # live configuration reload.
	I1107 23:39:12.963103   33391 command_runner.go:130] > # log_level = "info"
	I1107 23:39:12.963117   33391 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1107 23:39:12.963132   33391 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:39:12.963143   33391 command_runner.go:130] > # log_filter = ""
	I1107 23:39:12.963157   33391 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1107 23:39:12.963171   33391 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1107 23:39:12.963182   33391 command_runner.go:130] > # separated by comma.
	I1107 23:39:12.963190   33391 command_runner.go:130] > # uid_mappings = ""
	I1107 23:39:12.963204   33391 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1107 23:39:12.963218   33391 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1107 23:39:12.963229   33391 command_runner.go:130] > # separated by comma.
	I1107 23:39:12.963240   33391 command_runner.go:130] > # gid_mappings = ""
	I1107 23:39:12.963255   33391 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1107 23:39:12.963269   33391 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1107 23:39:12.963284   33391 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1107 23:39:12.963294   33391 command_runner.go:130] > # minimum_mappable_uid = -1
	I1107 23:39:12.963306   33391 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1107 23:39:12.963321   33391 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1107 23:39:12.963336   33391 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1107 23:39:12.963347   33391 command_runner.go:130] > # minimum_mappable_gid = -1
	I1107 23:39:12.963361   33391 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1107 23:39:12.963375   33391 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1107 23:39:12.963388   33391 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1107 23:39:12.963398   33391 command_runner.go:130] > # ctr_stop_timeout = 30
	I1107 23:39:12.963410   33391 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1107 23:39:12.963423   33391 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1107 23:39:12.963436   33391 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1107 23:39:12.963449   33391 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1107 23:39:12.963463   33391 command_runner.go:130] > drop_infra_ctr = false
	I1107 23:39:12.963477   33391 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1107 23:39:12.963491   33391 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1107 23:39:12.963507   33391 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1107 23:39:12.963518   33391 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1107 23:39:12.963533   33391 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1107 23:39:12.963546   33391 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1107 23:39:12.963557   33391 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1107 23:39:12.963573   33391 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1107 23:39:12.963584   33391 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1107 23:39:12.963598   33391 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1107 23:39:12.963613   33391 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1107 23:39:12.963628   33391 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1107 23:39:12.963637   33391 command_runner.go:130] > # default_runtime = "runc"
	I1107 23:39:12.963649   33391 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1107 23:39:12.963666   33391 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1107 23:39:12.963684   33391 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1107 23:39:12.963696   33391 command_runner.go:130] > # creation as a file is not desired either.
	I1107 23:39:12.963712   33391 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1107 23:39:12.963724   33391 command_runner.go:130] > # the hostname is being managed dynamically.
	I1107 23:39:12.963737   33391 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1107 23:39:12.963746   33391 command_runner.go:130] > # ]
	I1107 23:39:12.963759   33391 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1107 23:39:12.963774   33391 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1107 23:39:12.963789   33391 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1107 23:39:12.963803   33391 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1107 23:39:12.963812   33391 command_runner.go:130] > #
	I1107 23:39:12.963822   33391 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1107 23:39:12.963834   33391 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1107 23:39:12.963845   33391 command_runner.go:130] > #  runtime_type = "oci"
	I1107 23:39:12.963854   33391 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1107 23:39:12.963866   33391 command_runner.go:130] > #  privileged_without_host_devices = false
	I1107 23:39:12.963878   33391 command_runner.go:130] > #  allowed_annotations = []
	I1107 23:39:12.963888   33391 command_runner.go:130] > # Where:
	I1107 23:39:12.963899   33391 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1107 23:39:12.963914   33391 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1107 23:39:12.963929   33391 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1107 23:39:12.963943   33391 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1107 23:39:12.963953   33391 command_runner.go:130] > #   in $PATH.
	I1107 23:39:12.963967   33391 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1107 23:39:12.963981   33391 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1107 23:39:12.963995   33391 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1107 23:39:12.964005   33391 command_runner.go:130] > #   state.
	I1107 23:39:12.964020   33391 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1107 23:39:12.964034   33391 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1107 23:39:12.964049   33391 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1107 23:39:12.964062   33391 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1107 23:39:12.964076   33391 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1107 23:39:12.964090   33391 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1107 23:39:12.964102   33391 command_runner.go:130] > #   The currently recognized values are:
	I1107 23:39:12.964116   33391 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1107 23:39:12.964135   33391 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1107 23:39:12.964149   33391 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1107 23:39:12.964163   33391 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1107 23:39:12.964177   33391 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1107 23:39:12.964191   33391 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1107 23:39:12.964206   33391 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1107 23:39:12.964221   33391 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1107 23:39:12.964233   33391 command_runner.go:130] > #   should be moved to the container's cgroup
	I1107 23:39:12.964244   33391 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1107 23:39:12.964253   33391 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1107 23:39:12.964264   33391 command_runner.go:130] > runtime_type = "oci"
	I1107 23:39:12.964276   33391 command_runner.go:130] > runtime_root = "/run/runc"
	I1107 23:39:12.964287   33391 command_runner.go:130] > runtime_config_path = ""
	I1107 23:39:12.964295   33391 command_runner.go:130] > monitor_path = ""
	I1107 23:39:12.964306   33391 command_runner.go:130] > monitor_cgroup = ""
	I1107 23:39:12.964317   33391 command_runner.go:130] > monitor_exec_cgroup = ""
	I1107 23:39:12.964331   33391 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1107 23:39:12.964343   33391 command_runner.go:130] > # running containers
	I1107 23:39:12.964352   33391 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1107 23:39:12.964366   33391 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1107 23:39:12.964419   33391 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1107 23:39:12.964432   33391 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1107 23:39:12.964442   33391 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1107 23:39:12.964454   33391 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1107 23:39:12.964465   33391 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1107 23:39:12.964476   33391 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1107 23:39:12.964489   33391 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1107 23:39:12.964501   33391 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1107 23:39:12.964515   33391 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1107 23:39:12.964529   33391 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1107 23:39:12.964543   33391 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1107 23:39:12.964560   33391 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1107 23:39:12.964577   33391 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1107 23:39:12.964591   33391 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1107 23:39:12.964610   33391 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1107 23:39:12.964626   33391 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1107 23:39:12.964638   33391 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1107 23:39:12.964653   33391 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1107 23:39:12.964663   33391 command_runner.go:130] > # Example:
	I1107 23:39:12.964676   33391 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1107 23:39:12.964688   33391 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1107 23:39:12.964701   33391 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1107 23:39:12.964714   33391 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1107 23:39:12.964722   33391 command_runner.go:130] > # cpuset = 0
	I1107 23:39:12.964732   33391 command_runner.go:130] > # cpushares = "0-1"
	I1107 23:39:12.964740   33391 command_runner.go:130] > # Where:
	I1107 23:39:12.964750   33391 command_runner.go:130] > # The workload name is workload-type.
	I1107 23:39:12.964765   33391 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1107 23:39:12.964779   33391 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1107 23:39:12.964792   33391 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1107 23:39:12.964809   33391 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1107 23:39:12.964833   33391 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1107 23:39:12.964842   33391 command_runner.go:130] > # 
	I1107 23:39:12.964855   33391 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1107 23:39:12.964864   33391 command_runner.go:130] > #
	I1107 23:39:12.964876   33391 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1107 23:39:12.964890   33391 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1107 23:39:12.964904   33391 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1107 23:39:12.964919   33391 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1107 23:39:12.964933   33391 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1107 23:39:12.964943   33391 command_runner.go:130] > [crio.image]
	I1107 23:39:12.964958   33391 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1107 23:39:12.964969   33391 command_runner.go:130] > # default_transport = "docker://"
	I1107 23:39:12.964981   33391 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1107 23:39:12.964995   33391 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1107 23:39:12.965006   33391 command_runner.go:130] > # global_auth_file = ""
	I1107 23:39:12.965016   33391 command_runner.go:130] > # The image used to instantiate infra containers.
	I1107 23:39:12.965028   33391 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:39:12.965042   33391 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1107 23:39:12.965057   33391 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1107 23:39:12.965071   33391 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1107 23:39:12.965083   33391 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:39:12.965094   33391 command_runner.go:130] > # pause_image_auth_file = ""
	I1107 23:39:12.965108   33391 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1107 23:39:12.965126   33391 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1107 23:39:12.965140   33391 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1107 23:39:12.965153   33391 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1107 23:39:12.965164   33391 command_runner.go:130] > # pause_command = "/pause"
	I1107 23:39:12.965176   33391 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1107 23:39:12.965190   33391 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1107 23:39:12.965205   33391 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1107 23:39:12.965219   33391 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1107 23:39:12.965231   33391 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1107 23:39:12.965239   33391 command_runner.go:130] > # signature_policy = ""
	I1107 23:39:12.965253   33391 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1107 23:39:12.965267   33391 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1107 23:39:12.965278   33391 command_runner.go:130] > # changing them here.
	I1107 23:39:12.965287   33391 command_runner.go:130] > # insecure_registries = [
	I1107 23:39:12.965296   33391 command_runner.go:130] > # ]
	I1107 23:39:12.965314   33391 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1107 23:39:12.965326   33391 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1107 23:39:12.965334   33391 command_runner.go:130] > # image_volumes = "mkdir"
	I1107 23:39:12.965347   33391 command_runner.go:130] > # Temporary directory to use for storing big files
	I1107 23:39:12.965359   33391 command_runner.go:130] > # big_files_temporary_dir = ""
	I1107 23:39:12.965373   33391 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1107 23:39:12.965383   33391 command_runner.go:130] > # CNI plugins.
	I1107 23:39:12.965393   33391 command_runner.go:130] > [crio.network]
	I1107 23:39:12.965405   33391 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1107 23:39:12.965417   33391 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1107 23:39:12.965426   33391 command_runner.go:130] > # cni_default_network = ""
	I1107 23:39:12.965441   33391 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1107 23:39:12.965453   33391 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1107 23:39:12.965466   33391 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1107 23:39:12.965477   33391 command_runner.go:130] > # plugin_dirs = [
	I1107 23:39:12.965485   33391 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1107 23:39:12.965494   33391 command_runner.go:130] > # ]
	I1107 23:39:12.965505   33391 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1107 23:39:12.965515   33391 command_runner.go:130] > [crio.metrics]
	I1107 23:39:12.965524   33391 command_runner.go:130] > # Globally enable or disable metrics support.
	I1107 23:39:12.965535   33391 command_runner.go:130] > enable_metrics = true
	I1107 23:39:12.965545   33391 command_runner.go:130] > # Specify enabled metrics collectors.
	I1107 23:39:12.965557   33391 command_runner.go:130] > # Per default all metrics are enabled.
	I1107 23:39:12.965572   33391 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1107 23:39:12.965586   33391 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1107 23:39:12.965600   33391 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1107 23:39:12.965610   33391 command_runner.go:130] > # metrics_collectors = [
	I1107 23:39:12.965618   33391 command_runner.go:130] > # 	"operations",
	I1107 23:39:12.965630   33391 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1107 23:39:12.965642   33391 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1107 23:39:12.965653   33391 command_runner.go:130] > # 	"operations_errors",
	I1107 23:39:12.965661   33391 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1107 23:39:12.965672   33391 command_runner.go:130] > # 	"image_pulls_by_name",
	I1107 23:39:12.965681   33391 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1107 23:39:12.965692   33391 command_runner.go:130] > # 	"image_pulls_failures",
	I1107 23:39:12.965703   33391 command_runner.go:130] > # 	"image_pulls_successes",
	I1107 23:39:12.965715   33391 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1107 23:39:12.965726   33391 command_runner.go:130] > # 	"image_layer_reuse",
	I1107 23:39:12.965738   33391 command_runner.go:130] > # 	"containers_oom_total",
	I1107 23:39:12.965749   33391 command_runner.go:130] > # 	"containers_oom",
	I1107 23:39:12.965759   33391 command_runner.go:130] > # 	"processes_defunct",
	I1107 23:39:12.965770   33391 command_runner.go:130] > # 	"operations_total",
	I1107 23:39:12.965778   33391 command_runner.go:130] > # 	"operations_latency_seconds",
	I1107 23:39:12.965791   33391 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1107 23:39:12.965802   33391 command_runner.go:130] > # 	"operations_errors_total",
	I1107 23:39:12.965813   33391 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1107 23:39:12.965825   33391 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1107 23:39:12.965836   33391 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1107 23:39:12.965848   33391 command_runner.go:130] > # 	"image_pulls_success_total",
	I1107 23:39:12.965858   33391 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1107 23:39:12.965866   33391 command_runner.go:130] > # 	"containers_oom_count_total",
	I1107 23:39:12.965875   33391 command_runner.go:130] > # ]
	I1107 23:39:12.965885   33391 command_runner.go:130] > # The port on which the metrics server will listen.
	I1107 23:39:12.965896   33391 command_runner.go:130] > # metrics_port = 9090
	I1107 23:39:12.965906   33391 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1107 23:39:12.965916   33391 command_runner.go:130] > # metrics_socket = ""
	I1107 23:39:12.965927   33391 command_runner.go:130] > # The certificate for the secure metrics server.
	I1107 23:39:12.965941   33391 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1107 23:39:12.965955   33391 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1107 23:39:12.965966   33391 command_runner.go:130] > # certificate on any modification event.
	I1107 23:39:12.965975   33391 command_runner.go:130] > # metrics_cert = ""
	I1107 23:39:12.965988   33391 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1107 23:39:12.966000   33391 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1107 23:39:12.966011   33391 command_runner.go:130] > # metrics_key = ""
	I1107 23:39:12.966022   33391 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1107 23:39:12.966032   33391 command_runner.go:130] > [crio.tracing]
	I1107 23:39:12.966045   33391 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1107 23:39:12.966057   33391 command_runner.go:130] > # enable_tracing = false
	I1107 23:39:12.966069   33391 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1107 23:39:12.966081   33391 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1107 23:39:12.966095   33391 command_runner.go:130] > # Number of samples to collect per million spans.
	I1107 23:39:12.966106   33391 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1107 23:39:12.966126   33391 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1107 23:39:12.966137   33391 command_runner.go:130] > [crio.stats]
	I1107 23:39:12.966151   33391 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1107 23:39:12.966164   33391 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1107 23:39:12.966175   33391 command_runner.go:130] > # stats_collection_period = 0
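For reference, any of the [crio.*] settings dumped above can be overridden without editing the main config file: CRI-O also reads TOML drop-ins from /etc/crio/crio.conf.d/. A minimal sketch, assuming shell access to the node (the drop-in file name 99-overrides.conf is a hypothetical example):

# Write a drop-in that restates two of the settings shown above,
# then restart CRI-O so it re-reads its configuration.
sudo tee /etc/crio/crio.conf.d/99-overrides.conf >/dev/null <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.metrics]
enable_metrics = true
metrics_port = 9090
EOF
sudo systemctl restart crio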
	I1107 23:39:12.966247   33391 cni.go:84] Creating CNI manager for ""
	I1107 23:39:12.966257   33391 cni.go:136] 3 nodes found, recommending kindnet
	I1107 23:39:12.966268   33391 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 23:39:12.966293   33391 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.137 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-553062 NodeName:multinode-553062-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1107 23:39:12.966422   33391 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-553062-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
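The kubeadm config printed above is what gets written to the joining node. As a sanity check, recent kubeadm releases can validate such a file before it is used; a sketch, assuming the YAML above was saved to /tmp/kubeadm.yaml (a hypothetical path):

# Validate the generated config with the same kubeadm binary minikube uses.
sudo /var/lib/minikube/binaries/v1.28.3/kubeadm config validate --config /tmp/kubeadm.yaml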
	I1107 23:39:12.966494   33391 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-553062-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-553062 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
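The unit override above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (scp'd a few lines below). To inspect the effective unit and apply changes on the node, the standard systemd flow is:

# Show kubelet.service together with its drop-ins, then reload and restart.
systemctl cat kubelet
sudo systemctl daemon-reload && sudo systemctl restart kubelet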
	I1107 23:39:12.966562   33391 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1107 23:39:12.980708   33391 command_runner.go:130] > kubeadm
	I1107 23:39:12.980730   33391 command_runner.go:130] > kubectl
	I1107 23:39:12.980736   33391 command_runner.go:130] > kubelet
	I1107 23:39:12.981088   33391 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 23:39:12.981139   33391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1107 23:39:12.991749   33391 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1107 23:39:13.009945   33391 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 23:39:13.029730   33391 ssh_runner.go:195] Run: grep 192.168.39.246	control-plane.minikube.internal$ /etc/hosts
	I1107 23:39:13.033503   33391 command_runner.go:130] > 192.168.39.246	control-plane.minikube.internal
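The grep above confirms the control-plane alias is already resolvable on the node. If it were missing, an idempotent one-liner along these lines would add it (sketch only, assuming root via sudo):

# Append the hosts entry only when grep finds no existing line.
grep -q 'control-plane.minikube.internal' /etc/hosts || \
  echo '192.168.39.246	control-plane.minikube.internal' | sudo tee -a /etc/hosts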
	I1107 23:39:13.033555   33391 host.go:66] Checking if "multinode-553062" exists ...
	I1107 23:39:13.033834   33391 config.go:182] Loaded profile config "multinode-553062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:39:13.033951   33391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:39:13.033980   33391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:39:13.048516   33391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36051
	I1107 23:39:13.048896   33391 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:39:13.049307   33391 main.go:141] libmachine: Using API Version  1
	I1107 23:39:13.049328   33391 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:39:13.049610   33391 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:39:13.049788   33391 main.go:141] libmachine: (multinode-553062) Calling .DriverName
	I1107 23:39:13.049950   33391 start.go:304] JoinCluster: &{Name:multinode-553062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-553062 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.137 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.201 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:39:13.050088   33391 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1107 23:39:13.050107   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHHostname
	I1107 23:39:13.052760   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:39:13.053151   33391 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:36:53 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:39:13.053191   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:39:13.053298   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHPort
	I1107 23:39:13.053455   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:39:13.053604   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHUsername
	I1107 23:39:13.053755   33391 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062/id_rsa Username:docker}
	I1107 23:39:13.248940   33391 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token zs4sm2.vy3yf3mnmgfs2zlc --discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 
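The join command above was produced by the `kubeadm token create` invocation a few lines earlier; it can be regenerated on the control plane at any time (--ttl=0 makes the bootstrap token non-expiring, which is why the test passes it):

sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" \
  kubeadm token create --print-join-command --ttl=0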
	I1107 23:39:13.251919   33391 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.137 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1107 23:39:13.251957   33391 host.go:66] Checking if "multinode-553062" exists ...
	I1107 23:39:13.252276   33391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:39:13.252308   33391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:39:13.266574   33391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46107
	I1107 23:39:13.267072   33391 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:39:13.267646   33391 main.go:141] libmachine: Using API Version  1
	I1107 23:39:13.267675   33391 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:39:13.268047   33391 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:39:13.268249   33391 main.go:141] libmachine: (multinode-553062) Calling .DriverName
	I1107 23:39:13.268474   33391 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl drain multinode-553062-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1107 23:39:13.268502   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHHostname
	I1107 23:39:13.271600   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:39:13.272182   33391 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:36:53 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:39:13.272213   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:39:13.272385   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHPort
	I1107 23:39:13.272589   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:39:13.272760   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHUsername
	I1107 23:39:13.272909   33391 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062/id_rsa Username:docker}
	I1107 23:39:13.473243   33391 command_runner.go:130] > node/multinode-553062-m02 cordoned
	I1107 23:39:16.513913   33391 command_runner.go:130] > pod "busybox-5bc68d56bd-z67r2" has DeletionTimestamp older than 1 seconds, skipping
	I1107 23:39:16.513961   33391 command_runner.go:130] > node/multinode-553062-m02 drained
	I1107 23:39:16.515449   33391 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1107 23:39:16.515473   33391 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-4v85d, kube-system/kube-proxy-rktlk
	I1107 23:39:16.515504   33391 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl drain multinode-553062-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.246998395s)
	I1107 23:39:16.515522   33391 node.go:108] successfully drained node "m02"
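The same teardown can be reproduced by hand from the control plane. A sketch mirroring the flags minikube passed above, minus the deprecated --delete-local-data (the warning at 23:39:16.515449 notes it is superseded by --delete-emptydir-data):

kubectl drain multinode-553062-m02 --force --grace-period=1 \
  --skip-wait-for-delete-timeout=1 --disable-eviction \
  --ignore-daemonsets --delete-emptydir-data
kubectl delete node multinode-553062-m02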
	I1107 23:39:16.515862   33391 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1107 23:39:16.516055   33391 kapi.go:59] client config for multinode-553062: &rest.Config{Host:"https://192.168.39.246:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.key", CAFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:39:16.516367   33391 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1107 23:39:16.516427   33391 round_trippers.go:463] DELETE https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:39:16.516439   33391 round_trippers.go:469] Request Headers:
	I1107 23:39:16.516451   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:39:16.516461   33391 round_trippers.go:473]     Content-Type: application/json
	I1107 23:39:16.516469   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:39:16.527448   33391 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1107 23:39:16.527465   33391 round_trippers.go:577] Response Headers:
	I1107 23:39:16.527472   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:39:16.527478   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:39:16.527483   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:39:16.527489   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:39:16.527501   33391 round_trippers.go:580]     Content-Length: 171
	I1107 23:39:16.527509   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:39:16 GMT
	I1107 23:39:16.527514   33391 round_trippers.go:580]     Audit-Id: b0b78620-e59c-450b-8947-8112fbb69423
	I1107 23:39:16.527978   33391 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-553062-m02","kind":"nodes","uid":"53135fdd-bf09-4482-8469-d918d3e75ee3"}}
	I1107 23:39:16.528036   33391 node.go:124] successfully deleted node "m02"
	I1107 23:39:16.528049   33391 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.137 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
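The node deletion above is a single REST call against the API server. A curl equivalent using the client certificate paths from the kapi client config logged earlier (sketch only; `kubectl delete node` performs the same call):

curl -sS -X DELETE https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02 \
  --cacert /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt \
  --cert /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.crt \
  --key /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.key \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1"}'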
	I1107 23:39:16.528073   33391 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.137 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1107 23:39:16.528108   33391 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zs4sm2.vy3yf3mnmgfs2zlc --discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-553062-m02"
	I1107 23:39:16.578431   33391 command_runner.go:130] > [preflight] Running pre-flight checks
	I1107 23:39:16.733991   33391 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1107 23:39:16.734017   33391 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1107 23:39:16.835603   33391 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 23:39:16.835710   33391 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 23:39:16.836096   33391 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1107 23:39:17.083456   33391 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1107 23:39:17.605610   33391 command_runner.go:130] > This node has joined the cluster:
	I1107 23:39:17.605640   33391 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1107 23:39:17.605649   33391 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1107 23:39:17.605660   33391 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1107 23:39:17.608162   33391 command_runner.go:130] ! W1107 23:39:16.572353    2595 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1107 23:39:17.608194   33391 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1107 23:39:17.608206   33391 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1107 23:39:17.608219   33391 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1107 23:39:17.608243   33391 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zs4sm2.vy3yf3mnmgfs2zlc --discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-553062-m02": (1.080118271s)
	I1107 23:39:17.608269   33391 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1107 23:39:17.895141   33391 start.go:306] JoinCluster complete in 4.845185307s
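With the join complete, the rejoined worker can be confirmed from the control plane; the certificate signing request mentioned in the kubeadm output should show as approved:

kubectl get nodes -o wide   # multinode-553062-m02 should be listed again
kubectl get csr             # TLS bootstrap request should read Approved,Issued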
	I1107 23:39:17.895172   33391 cni.go:84] Creating CNI manager for ""
	I1107 23:39:17.895182   33391 cni.go:136] 3 nodes found, recommending kindnet
	I1107 23:39:17.895228   33391 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1107 23:39:17.900911   33391 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1107 23:39:17.900935   33391 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1107 23:39:17.900945   33391 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1107 23:39:17.900955   33391 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1107 23:39:17.900966   33391 command_runner.go:130] > Access: 2023-11-07 23:36:53.922905698 +0000
	I1107 23:39:17.900974   33391 command_runner.go:130] > Modify: 2023-11-07 07:42:50.000000000 +0000
	I1107 23:39:17.900979   33391 command_runner.go:130] > Change: 2023-11-07 23:36:52.115905698 +0000
	I1107 23:39:17.900986   33391 command_runner.go:130] >  Birth: -
	I1107 23:39:17.901187   33391 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1107 23:39:17.901203   33391 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1107 23:39:17.919942   33391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1107 23:39:18.268775   33391 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1107 23:39:18.275783   33391 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1107 23:39:18.280955   33391 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1107 23:39:18.301778   33391 command_runner.go:130] > daemonset.apps/kindnet configured
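The "unchanged"/"configured" lines above are kubectl apply's per-object results; re-running the same apply is safe because it is idempotent. The equivalent manual invocation, followed by waiting for the DaemonSet to roll out:

sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply \
  --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
kubectl -n kube-system rollout status daemonset/kindnet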
	I1107 23:39:18.304643   33391 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1107 23:39:18.304969   33391 kapi.go:59] client config for multinode-553062: &rest.Config{Host:"https://192.168.39.246:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.key", CAFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:39:18.305348   33391 round_trippers.go:463] GET https://192.168.39.246:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1107 23:39:18.305369   33391 round_trippers.go:469] Request Headers:
	I1107 23:39:18.305380   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:39:18.305389   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:39:18.313951   33391 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1107 23:39:18.313972   33391 round_trippers.go:577] Response Headers:
	I1107 23:39:18.313982   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:39:18.313992   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:39:18.314006   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:39:18.314015   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:39:18.314026   33391 round_trippers.go:580]     Content-Length: 291
	I1107 23:39:18.314036   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:39:18 GMT
	I1107 23:39:18.314044   33391 round_trippers.go:580]     Audit-Id: 2b3c2b53-b7d0-4f72-95b3-739c7558eec6
	I1107 23:39:18.314067   33391 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"99a4298f-5274-4bac-956d-86f8091a0b82","resourceVersion":"859","creationTimestamp":"2023-11-07T23:26:57Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1107 23:39:18.314163   33391 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-553062" context rescaled to 1 replicas
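The rescale above goes through the deployment's scale subresource (the GET of .../deployments/coredns/scale); the CLI shorthand for the same operation is:

kubectl -n kube-system scale deployment coredns --replicas=1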
	I1107 23:39:18.314194   33391 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.137 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1107 23:39:18.315990   33391 out.go:177] * Verifying Kubernetes components...
	I1107 23:39:18.317488   33391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:39:18.332764   33391 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1107 23:39:18.333106   33391 kapi.go:59] client config for multinode-553062: &rest.Config{Host:"https://192.168.39.246:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.key", CAFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:39:18.333415   33391 node_ready.go:35] waiting up to 6m0s for node "multinode-553062-m02" to be "Ready" ...
	I1107 23:39:18.333498   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:39:18.333510   33391 round_trippers.go:469] Request Headers:
	I1107 23:39:18.333521   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:39:18.333533   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:39:18.341422   33391 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1107 23:39:18.341441   33391 round_trippers.go:577] Response Headers:
	I1107 23:39:18.341449   33391 round_trippers.go:580]     Audit-Id: f0ba28cb-0adb-4738-be7b-7e32795370ec
	I1107 23:39:18.341454   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:39:18.341459   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:39:18.341465   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:39:18.341477   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:39:18.341491   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:39:18 GMT
	I1107 23:39:18.341811   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m02","uid":"4d60d501-112e-48fa-9d2b-2a6a7823e694","resourceVersion":"1011","creationTimestamp":"2023-11-07T23:39:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:39:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:39:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1107 23:39:18.342062   33391 node_ready.go:49] node "multinode-553062-m02" has status "Ready":"True"
	I1107 23:39:18.342077   33391 node_ready.go:38] duration metric: took 8.640001ms waiting for node "multinode-553062-m02" to be "Ready" ...
	I1107 23:39:18.342086   33391 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
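The polling loop that follows (GET the node, then GET each system pod) maps onto `kubectl wait`, which blocks on the same status conditions; a sketch with the 6m budget used above:

kubectl wait --for=condition=Ready node/multinode-553062-m02 --timeout=6m
kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m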
	I1107 23:39:18.342162   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1107 23:39:18.342173   33391 round_trippers.go:469] Request Headers:
	I1107 23:39:18.342183   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:39:18.342197   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:39:18.347367   33391 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1107 23:39:18.347388   33391 round_trippers.go:577] Response Headers:
	I1107 23:39:18.347398   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:39:18.347405   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:39:18.347412   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:39:18.347425   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:39:18 GMT
	I1107 23:39:18.347433   33391 round_trippers.go:580]     Audit-Id: 6378e4f6-ac72-4be7-b868-5392d0a2a964
	I1107 23:39:18.347446   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:39:18.349450   33391 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1019"},"items":[{"metadata":{"name":"coredns-5dd5756b68-6ggfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"785c6064-d793-4959-8e34-28b4fc2549fc","resourceVersion":"848","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b131694e-1b3b-40e6-bc1b-3f62a604903c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b131694e-1b3b-40e6-bc1b-3f62a604903c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82242 chars]
	I1107 23:39:18.351970   33391 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6ggfr" in "kube-system" namespace to be "Ready" ...
	I1107 23:39:18.352031   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6ggfr
	I1107 23:39:18.352039   33391 round_trippers.go:469] Request Headers:
	I1107 23:39:18.352046   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:39:18.352052   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:39:18.354643   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:39:18.354662   33391 round_trippers.go:577] Response Headers:
	I1107 23:39:18.354671   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:39:18 GMT
	I1107 23:39:18.354680   33391 round_trippers.go:580]     Audit-Id: cc5b8c9d-ac38-4b3e-b548-64539e2acc65
	I1107 23:39:18.354693   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:39:18.354704   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:39:18.354715   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:39:18.354723   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:39:18.354880   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6ggfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"785c6064-d793-4959-8e34-28b4fc2549fc","resourceVersion":"848","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b131694e-1b3b-40e6-bc1b-3f62a604903c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b131694e-1b3b-40e6-bc1b-3f62a604903c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1107 23:39:18.355407   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:39:18.355425   33391 round_trippers.go:469] Request Headers:
	I1107 23:39:18.355435   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:39:18.355450   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:39:18.357292   33391 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 23:39:18.357306   33391 round_trippers.go:577] Response Headers:
	I1107 23:39:18.357314   33391 round_trippers.go:580]     Audit-Id: fb0c2a32-0169-4302-9627-985150286187
	I1107 23:39:18.357321   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:39:18.357329   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:39:18.357337   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:39:18.357344   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:39:18.357353   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:39:18 GMT
	I1107 23:39:18.357700   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"878","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1107 23:39:18.357977   33391 pod_ready.go:92] pod "coredns-5dd5756b68-6ggfr" in "kube-system" namespace has status "Ready":"True"
	I1107 23:39:18.357991   33391 pod_ready.go:81] duration metric: took 6.00123ms waiting for pod "coredns-5dd5756b68-6ggfr" in "kube-system" namespace to be "Ready" ...
	I1107 23:39:18.358007   33391 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:39:18.358045   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-553062
	I1107 23:39:18.358053   33391 round_trippers.go:469] Request Headers:
	I1107 23:39:18.358061   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:39:18.358067   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:39:18.360100   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:39:18.360112   33391 round_trippers.go:577] Response Headers:
	I1107 23:39:18.360118   33391 round_trippers.go:580]     Audit-Id: 9da73eb1-740a-47a5-a4a7-b686ad732d27
	I1107 23:39:18.360123   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:39:18.360130   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:39:18.360138   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:39:18.360154   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:39:18.360163   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:39:18 GMT
	I1107 23:39:18.360344   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-553062","namespace":"kube-system","uid":"3819c5f8-686f-4ce6-95fb-e9d5bb68cbc1","resourceVersion":"839","creationTimestamp":"2023-11-07T23:26:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.246:2379","kubernetes.io/config.hash":"f82562fbdca14daeb385ae6968954f46","kubernetes.io/config.mirror":"f82562fbdca14daeb385ae6968954f46","kubernetes.io/config.seen":"2023-11-07T23:26:48.362630200Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1107 23:39:18.360663   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:39:18.360678   33391 round_trippers.go:469] Request Headers:
	I1107 23:39:18.360688   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:39:18.360698   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:39:18.362739   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:39:18.362757   33391 round_trippers.go:577] Response Headers:
	I1107 23:39:18.362765   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:39:18.362773   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:39:18.362781   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:39:18 GMT
	I1107 23:39:18.362793   33391 round_trippers.go:580]     Audit-Id: bf442107-2421-4aa4-a0c7-fb2c94591337
	I1107 23:39:18.362802   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:39:18.362813   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:39:18.362933   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"878","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1107 23:39:18.363300   33391 pod_ready.go:92] pod "etcd-multinode-553062" in "kube-system" namespace has status "Ready":"True"
	I1107 23:39:18.363321   33391 pod_ready.go:81] duration metric: took 5.307019ms waiting for pod "etcd-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:39:18.363342   33391 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:39:18.363396   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-553062
	I1107 23:39:18.363409   33391 round_trippers.go:469] Request Headers:
	I1107 23:39:18.363419   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:39:18.363432   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:39:18.365243   33391 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 23:39:18.365257   33391 round_trippers.go:577] Response Headers:
	I1107 23:39:18.365262   33391 round_trippers.go:580]     Audit-Id: 84f8286c-0d6c-4a0d-951b-e3a91bff6e9b
	I1107 23:39:18.365268   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:39:18.365276   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:39:18.365286   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:39:18.365294   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:39:18.365302   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:39:18 GMT
	I1107 23:39:18.365579   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-553062","namespace":"kube-system","uid":"30896fa0-3d8f-4861-bdf5-ad94796ad097","resourceVersion":"841","creationTimestamp":"2023-11-07T23:26:57Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.246:8443","kubernetes.io/config.hash":"cf3161d745dce4ca9e35cf659a0b5ec9","kubernetes.io/config.mirror":"cf3161d745dce4ca9e35cf659a0b5ec9","kubernetes.io/config.seen":"2023-11-07T23:26:57.103263110Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1107 23:39:18.365922   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:39:18.365933   33391 round_trippers.go:469] Request Headers:
	I1107 23:39:18.365940   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:39:18.365949   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:39:18.368346   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:39:18.368364   33391 round_trippers.go:577] Response Headers:
	I1107 23:39:18.368373   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:39:18 GMT
	I1107 23:39:18.368384   33391 round_trippers.go:580]     Audit-Id: 55e7570d-ca9a-4472-b07f-5d7bf8bcda31
	I1107 23:39:18.368396   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:39:18.368404   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:39:18.368415   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:39:18.368427   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:39:18.369232   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"878","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1107 23:39:18.369498   33391 pod_ready.go:92] pod "kube-apiserver-multinode-553062" in "kube-system" namespace has status "Ready":"True"
	I1107 23:39:18.369512   33391 pod_ready.go:81] duration metric: took 6.158038ms waiting for pod "kube-apiserver-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:39:18.369522   33391 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:39:18.369564   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-553062
	I1107 23:39:18.369573   33391 round_trippers.go:469] Request Headers:
	I1107 23:39:18.369583   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:39:18.369600   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:39:18.372053   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:39:18.372071   33391 round_trippers.go:577] Response Headers:
	I1107 23:39:18.372080   33391 round_trippers.go:580]     Audit-Id: b6d57368-1a60-4c5c-b796-a2dd25c81e22
	I1107 23:39:18.372088   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:39:18.372096   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:39:18.372109   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:39:18.372126   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:39:18.372138   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:39:18 GMT
	I1107 23:39:18.372856   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-553062","namespace":"kube-system","uid":"5a895945-b908-44ba-a1c8-93245f6a93f5","resourceVersion":"842","creationTimestamp":"2023-11-07T23:26:57Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6355e861fae0971467df802e2b4d8be6","kubernetes.io/config.mirror":"6355e861fae0971467df802e2b4d8be6","kubernetes.io/config.seen":"2023-11-07T23:26:57.103264314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1107 23:39:18.373226   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:39:18.373238   33391 round_trippers.go:469] Request Headers:
	I1107 23:39:18.373245   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:39:18.373251   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:39:18.376098   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:39:18.376114   33391 round_trippers.go:577] Response Headers:
	I1107 23:39:18.376123   33391 round_trippers.go:580]     Audit-Id: 84840181-7180-457b-b4b0-9fea218388b8
	I1107 23:39:18.376131   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:39:18.376140   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:39:18.376148   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:39:18.376158   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:39:18.376167   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:39:18 GMT
	I1107 23:39:18.376305   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"878","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1107 23:39:18.376614   33391 pod_ready.go:92] pod "kube-controller-manager-multinode-553062" in "kube-system" namespace has status "Ready":"True"
	I1107 23:39:18.376631   33391 pod_ready.go:81] duration metric: took 7.101219ms waiting for pod "kube-controller-manager-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:39:18.376642   33391 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-944rz" in "kube-system" namespace to be "Ready" ...
	I1107 23:39:18.533983   33391 request.go:629] Waited for 157.295212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-944rz
	I1107 23:39:18.534054   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-944rz
	I1107 23:39:18.534060   33391 round_trippers.go:469] Request Headers:
	I1107 23:39:18.534067   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:39:18.534074   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:39:18.536995   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:39:18.537018   33391 round_trippers.go:577] Response Headers:
	I1107 23:39:18.537029   33391 round_trippers.go:580]     Audit-Id: 9daa4701-fae8-4057-8715-3fb75a45506f
	I1107 23:39:18.537037   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:39:18.537044   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:39:18.537051   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:39:18.537058   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:39:18.537069   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:39:18 GMT
	I1107 23:39:18.537230   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-944rz","generateName":"kube-proxy-","namespace":"kube-system","uid":"db20b1cf-b422-4649-a6e1-4549c4c56f33","resourceVersion":"772","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"072addbc-9bf2-4d6f-93c3-120a159f2721","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"072addbc-9bf2-4d6f-93c3-120a159f2721\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1107 23:39:18.734010   33391 request.go:629] Waited for 196.275143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:39:18.734102   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:39:18.734114   33391 round_trippers.go:469] Request Headers:
	I1107 23:39:18.734125   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:39:18.734137   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:39:18.739221   33391 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1107 23:39:18.739245   33391 round_trippers.go:577] Response Headers:
	I1107 23:39:18.739255   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:39:18 GMT
	I1107 23:39:18.739263   33391 round_trippers.go:580]     Audit-Id: 4cf9998f-1cbf-454b-959b-f20b74bc7a03
	I1107 23:39:18.739272   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:39:18.739279   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:39:18.739292   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:39:18.739305   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:39:18.740413   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"878","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1107 23:39:18.740792   33391 pod_ready.go:92] pod "kube-proxy-944rz" in "kube-system" namespace has status "Ready":"True"
	I1107 23:39:18.740821   33391 pod_ready.go:81] duration metric: took 364.161532ms waiting for pod "kube-proxy-944rz" in "kube-system" namespace to be "Ready" ...
	I1107 23:39:18.740838   33391 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rktlk" in "kube-system" namespace to be "Ready" ...
	I1107 23:39:18.934251   33391 request.go:629] Waited for 193.34934ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rktlk
	I1107 23:39:18.934329   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rktlk
	I1107 23:39:18.934345   33391 round_trippers.go:469] Request Headers:
	I1107 23:39:18.934357   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:39:18.934365   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:39:18.938089   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:39:18.938110   33391 round_trippers.go:577] Response Headers:
	I1107 23:39:18.938126   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:39:18 GMT
	I1107 23:39:18.938135   33391 round_trippers.go:580]     Audit-Id: 8758aebd-5451-4670-b40f-3b2a8b6cbae6
	I1107 23:39:18.938159   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:39:18.938168   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:39:18.938177   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:39:18.938187   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:39:18.938328   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rktlk","generateName":"kube-proxy-","namespace":"kube-system","uid":"92ea69ee-cd72-4594-a338-9837cc44e5a1","resourceVersion":"984","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"072addbc-9bf2-4d6f-93c3-120a159f2721","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"072addbc-9bf2-4d6f-93c3-120a159f2721\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5885 chars]
	I1107 23:39:19.134017   33391 request.go:629] Waited for 195.185162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:39:19.134074   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:39:19.134079   33391 round_trippers.go:469] Request Headers:
	I1107 23:39:19.134087   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:39:19.134095   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:39:19.137487   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:39:19.137508   33391 round_trippers.go:577] Response Headers:
	I1107 23:39:19.137515   33391 round_trippers.go:580]     Audit-Id: e539d272-0020-4f97-a4d3-f47ae5c7dcfa
	I1107 23:39:19.137521   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:39:19.137530   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:39:19.137535   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:39:19.137542   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:39:19.137547   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:39:19 GMT
	I1107 23:39:19.137968   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m02","uid":"4d60d501-112e-48fa-9d2b-2a6a7823e694","resourceVersion":"1011","creationTimestamp":"2023-11-07T23:39:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:39:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:39:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1107 23:39:19.333602   33391 request.go:629] Waited for 195.269191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rktlk
	I1107 23:39:19.333669   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rktlk
	I1107 23:39:19.333674   33391 round_trippers.go:469] Request Headers:
	I1107 23:39:19.333682   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:39:19.333689   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:39:19.347947   33391 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1107 23:39:19.347972   33391 round_trippers.go:577] Response Headers:
	I1107 23:39:19.347982   33391 round_trippers.go:580]     Audit-Id: f44d123a-d826-412e-aeb9-545ab191ea5d
	I1107 23:39:19.347989   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:39:19.347996   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:39:19.348004   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:39:19.348011   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:39:19.348019   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:39:19 GMT
	I1107 23:39:19.348163   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rktlk","generateName":"kube-proxy-","namespace":"kube-system","uid":"92ea69ee-cd72-4594-a338-9837cc44e5a1","resourceVersion":"1030","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"072addbc-9bf2-4d6f-93c3-120a159f2721","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"072addbc-9bf2-4d6f-93c3-120a159f2721\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5730 chars]
	I1107 23:39:19.533908   33391 request.go:629] Waited for 185.336806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:39:19.533967   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:39:19.533972   33391 round_trippers.go:469] Request Headers:
	I1107 23:39:19.533980   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:39:19.533985   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:39:19.536723   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:39:19.536737   33391 round_trippers.go:577] Response Headers:
	I1107 23:39:19.536744   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:39:19.536749   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:39:19.536754   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:39:19 GMT
	I1107 23:39:19.536760   33391 round_trippers.go:580]     Audit-Id: b1884df0-cc00-400d-be3b-9d0cb69fe7ad
	I1107 23:39:19.536768   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:39:19.536791   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:39:19.537153   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m02","uid":"4d60d501-112e-48fa-9d2b-2a6a7823e694","resourceVersion":"1011","creationTimestamp":"2023-11-07T23:39:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:39:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:39:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1107 23:39:19.537385   33391 pod_ready.go:92] pod "kube-proxy-rktlk" in "kube-system" namespace has status "Ready":"True"
	I1107 23:39:19.537398   33391 pod_ready.go:81] duration metric: took 796.553375ms waiting for pod "kube-proxy-rktlk" in "kube-system" namespace to be "Ready" ...
	I1107 23:39:19.537412   33391 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xwp5j" in "kube-system" namespace to be "Ready" ...
	I1107 23:39:19.733741   33391 request.go:629] Waited for 196.270028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xwp5j
	I1107 23:39:19.733819   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xwp5j
	I1107 23:39:19.733831   33391 round_trippers.go:469] Request Headers:
	I1107 23:39:19.733842   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:39:19.733850   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:39:19.736350   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:39:19.736362   33391 round_trippers.go:577] Response Headers:
	I1107 23:39:19.736368   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:39:19.736373   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:39:19 GMT
	I1107 23:39:19.736378   33391 round_trippers.go:580]     Audit-Id: bccae101-fb71-4a54-bdb8-5ebf817e686e
	I1107 23:39:19.736392   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:39:19.736401   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:39:19.736409   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:39:19.737000   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xwp5j","generateName":"kube-proxy-","namespace":"kube-system","uid":"0347e6b5-3070-4b6a-ae2b-d1ac56a385cd","resourceVersion":"691","creationTimestamp":"2023-11-07T23:28:45Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"072addbc-9bf2-4d6f-93c3-120a159f2721","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:28:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"072addbc-9bf2-4d6f-93c3-120a159f2721\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5525 chars]
	I1107 23:39:19.933676   33391 request.go:629] Waited for 196.268176ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m03
	I1107 23:39:19.933732   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m03
	I1107 23:39:19.933737   33391 round_trippers.go:469] Request Headers:
	I1107 23:39:19.933744   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:39:19.933750   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:39:19.936253   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:39:19.936270   33391 round_trippers.go:577] Response Headers:
	I1107 23:39:19.936278   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:39:19.936287   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:39:19.936298   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:39:19 GMT
	I1107 23:39:19.936308   33391 round_trippers.go:580]     Audit-Id: d8b07738-8717-4a83-baf6-fa60b44dcd49
	I1107 23:39:19.936320   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:39:19.936329   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:39:19.936791   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m03","uid":"c69b0e89-b34f-4710-b818-78e5076041aa","resourceVersion":"714","creationTimestamp":"2023-11-07T23:29:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:29:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I1107 23:39:19.937096   33391 pod_ready.go:92] pod "kube-proxy-xwp5j" in "kube-system" namespace has status "Ready":"True"
	I1107 23:39:19.937112   33391 pod_ready.go:81] duration metric: took 399.694976ms waiting for pod "kube-proxy-xwp5j" in "kube-system" namespace to be "Ready" ...
	I1107 23:39:19.937121   33391 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:39:20.134526   33391 request.go:629] Waited for 197.352062ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-553062
	I1107 23:39:20.134582   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-553062
	I1107 23:39:20.134588   33391 round_trippers.go:469] Request Headers:
	I1107 23:39:20.134595   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:39:20.134627   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:39:20.137650   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:39:20.137674   33391 round_trippers.go:577] Response Headers:
	I1107 23:39:20.137684   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:39:20.137692   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:39:20 GMT
	I1107 23:39:20.137700   33391 round_trippers.go:580]     Audit-Id: bbec9179-e09d-4776-b627-f43c1745497d
	I1107 23:39:20.137708   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:39:20.137716   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:39:20.137723   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:39:20.137879   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-553062","namespace":"kube-system","uid":"334a75af-c6cb-45ac-a020-8afc3f4a4e7a","resourceVersion":"870","creationTimestamp":"2023-11-07T23:26:57Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"101b31a45aab34f5dc66aed5e9e7cce1","kubernetes.io/config.mirror":"101b31a45aab34f5dc66aed5e9e7cce1","kubernetes.io/config.seen":"2023-11-07T23:26:57.103265171Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1107 23:39:20.334573   33391 request.go:629] Waited for 196.370043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:39:20.334636   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:39:20.334641   33391 round_trippers.go:469] Request Headers:
	I1107 23:39:20.334649   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:39:20.334660   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:39:20.338277   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:39:20.338294   33391 round_trippers.go:577] Response Headers:
	I1107 23:39:20.338302   33391 round_trippers.go:580]     Audit-Id: 59970486-7aff-496f-83b5-ebf9dbc058a1
	I1107 23:39:20.338307   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:39:20.338312   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:39:20.338317   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:39:20.338322   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:39:20.338327   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:39:20 GMT
	I1107 23:39:20.338557   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"878","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1107 23:39:20.338856   33391 pod_ready.go:92] pod "kube-scheduler-multinode-553062" in "kube-system" namespace has status "Ready":"True"
	I1107 23:39:20.338870   33391 pod_ready.go:81] duration metric: took 401.742145ms waiting for pod "kube-scheduler-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:39:20.338881   33391 pod_ready.go:38] duration metric: took 1.996770022s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:39:20.338892   33391 system_svc.go:44] waiting for kubelet service to be running ....
	I1107 23:39:20.338935   33391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:39:20.351799   33391 system_svc.go:56] duration metric: took 12.897624ms WaitForService to wait for kubelet.
	I1107 23:39:20.351823   33391 kubeadm.go:581] duration metric: took 2.037603435s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1107 23:39:20.351844   33391 node_conditions.go:102] verifying NodePressure condition ...
	I1107 23:39:20.534306   33391 request.go:629] Waited for 182.375918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes
	I1107 23:39:20.534368   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes
	I1107 23:39:20.534374   33391 round_trippers.go:469] Request Headers:
	I1107 23:39:20.534385   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:39:20.534395   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:39:20.537421   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:39:20.537438   33391 round_trippers.go:577] Response Headers:
	I1107 23:39:20.537444   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:39:20.537450   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:39:20 GMT
	I1107 23:39:20.537455   33391 round_trippers.go:580]     Audit-Id: 10ffef2a-dc1a-4e23-bdcb-52d1a2a3d91a
	I1107 23:39:20.537460   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:39:20.537465   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:39:20.537470   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:39:20.537950   33391 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1035"},"items":[{"metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"878","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15106 chars]
	I1107 23:39:20.538478   33391 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1107 23:39:20.538496   33391 node_conditions.go:123] node cpu capacity is 2
	I1107 23:39:20.538504   33391 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1107 23:39:20.538508   33391 node_conditions.go:123] node cpu capacity is 2
	I1107 23:39:20.538511   33391 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1107 23:39:20.538514   33391 node_conditions.go:123] node cpu capacity is 2
	I1107 23:39:20.538518   33391 node_conditions.go:105] duration metric: took 186.668773ms to run NodePressure ...
	I1107 23:39:20.538526   33391 start.go:228] waiting for startup goroutines ...
	I1107 23:39:20.538544   33391 start.go:242] writing updated cluster config ...
	I1107 23:39:20.538929   33391 config.go:182] Loaded profile config "multinode-553062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:39:20.539009   33391 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/config.json ...
	I1107 23:39:20.542270   33391 out.go:177] * Starting worker node multinode-553062-m03 in cluster multinode-553062
	I1107 23:39:20.543577   33391 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:39:20.543597   33391 cache.go:56] Caching tarball of preloaded images
	I1107 23:39:20.543672   33391 preload.go:174] Found /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1107 23:39:20.543683   33391 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1107 23:39:20.543766   33391 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/config.json ...
	I1107 23:39:20.543914   33391 start.go:365] acquiring machines lock for multinode-553062-m03: {Name:mkf032f30be570950285b6e092e75fb29cc3d166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1107 23:39:20.543952   33391 start.go:369] acquired machines lock for "multinode-553062-m03" in 21.296µs
	I1107 23:39:20.543964   33391 start.go:96] Skipping create...Using existing machine configuration
	I1107 23:39:20.543970   33391 fix.go:54] fixHost starting: m03
	I1107 23:39:20.544208   33391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:39:20.544227   33391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:39:20.558112   33391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36839
	I1107 23:39:20.558502   33391 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:39:20.558911   33391 main.go:141] libmachine: Using API Version  1
	I1107 23:39:20.558934   33391 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:39:20.559246   33391 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:39:20.559412   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .DriverName
	I1107 23:39:20.559571   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetState
	I1107 23:39:20.561055   33391 fix.go:102] recreateIfNeeded on multinode-553062-m03: state=Running err=<nil>
	W1107 23:39:20.561070   33391 fix.go:128] unexpected machine state, will restart: <nil>
	I1107 23:39:20.563032   33391 out.go:177] * Updating the running kvm2 "multinode-553062-m03" VM ...
	I1107 23:39:20.564367   33391 machine.go:88] provisioning docker machine ...
	I1107 23:39:20.564385   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .DriverName
	I1107 23:39:20.564608   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetMachineName
	I1107 23:39:20.564750   33391 buildroot.go:166] provisioning hostname "multinode-553062-m03"
	I1107 23:39:20.564764   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetMachineName
	I1107 23:39:20.564879   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHHostname
	I1107 23:39:20.567057   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | domain multinode-553062-m03 has defined MAC address 52:54:00:bf:50:75 in network mk-multinode-553062
	I1107 23:39:20.567477   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:50:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:29:22 +0000 UTC Type:0 Mac:52:54:00:bf:50:75 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-553062-m03 Clientid:01:52:54:00:bf:50:75}
	I1107 23:39:20.567515   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | domain multinode-553062-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:bf:50:75 in network mk-multinode-553062
	I1107 23:39:20.567635   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHPort
	I1107 23:39:20.567785   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHKeyPath
	I1107 23:39:20.567930   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHKeyPath
	I1107 23:39:20.568032   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHUsername
	I1107 23:39:20.568194   33391 main.go:141] libmachine: Using SSH client type: native
	I1107 23:39:20.568556   33391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1107 23:39:20.568573   33391 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-553062-m03 && echo "multinode-553062-m03" | sudo tee /etc/hostname
	I1107 23:39:20.714494   33391 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-553062-m03
	
	I1107 23:39:20.714525   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHHostname
	I1107 23:39:20.717359   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | domain multinode-553062-m03 has defined MAC address 52:54:00:bf:50:75 in network mk-multinode-553062
	I1107 23:39:20.717722   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:50:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:29:22 +0000 UTC Type:0 Mac:52:54:00:bf:50:75 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-553062-m03 Clientid:01:52:54:00:bf:50:75}
	I1107 23:39:20.717751   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | domain multinode-553062-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:bf:50:75 in network mk-multinode-553062
	I1107 23:39:20.717978   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHPort
	I1107 23:39:20.718149   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHKeyPath
	I1107 23:39:20.718307   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHKeyPath
	I1107 23:39:20.718412   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHUsername
	I1107 23:39:20.718544   33391 main.go:141] libmachine: Using SSH client type: native
	I1107 23:39:20.718997   33391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1107 23:39:20.719019   33391 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-553062-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-553062-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-553062-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 23:39:20.849698   33391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1107 23:39:20.849725   33391 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1107 23:39:20.849739   33391 buildroot.go:174] setting up certificates
	I1107 23:39:20.849747   33391 provision.go:83] configureAuth start
	I1107 23:39:20.849755   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetMachineName
	I1107 23:39:20.850065   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetIP
	I1107 23:39:20.852747   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | domain multinode-553062-m03 has defined MAC address 52:54:00:bf:50:75 in network mk-multinode-553062
	I1107 23:39:20.853120   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:50:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:29:22 +0000 UTC Type:0 Mac:52:54:00:bf:50:75 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-553062-m03 Clientid:01:52:54:00:bf:50:75}
	I1107 23:39:20.853140   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | domain multinode-553062-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:bf:50:75 in network mk-multinode-553062
	I1107 23:39:20.853287   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHHostname
	I1107 23:39:20.855222   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | domain multinode-553062-m03 has defined MAC address 52:54:00:bf:50:75 in network mk-multinode-553062
	I1107 23:39:20.855588   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:50:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:29:22 +0000 UTC Type:0 Mac:52:54:00:bf:50:75 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-553062-m03 Clientid:01:52:54:00:bf:50:75}
	I1107 23:39:20.855613   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | domain multinode-553062-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:bf:50:75 in network mk-multinode-553062
	I1107 23:39:20.855776   33391 provision.go:138] copyHostCerts
	I1107 23:39:20.855804   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1107 23:39:20.855838   33391 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1107 23:39:20.855905   33391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1107 23:39:20.856011   33391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1107 23:39:20.856133   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1107 23:39:20.856161   33391 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1107 23:39:20.856174   33391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1107 23:39:20.856214   33391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1107 23:39:20.856276   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1107 23:39:20.856298   33391 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1107 23:39:20.856308   33391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1107 23:39:20.856341   33391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1107 23:39:20.856402   33391 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.multinode-553062-m03 san=[192.168.39.201 192.168.39.201 localhost 127.0.0.1 minikube multinode-553062-m03]
	I1107 23:39:20.971267   33391 provision.go:172] copyRemoteCerts
	I1107 23:39:20.971319   33391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 23:39:20.971356   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHHostname
	I1107 23:39:20.974064   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | domain multinode-553062-m03 has defined MAC address 52:54:00:bf:50:75 in network mk-multinode-553062
	I1107 23:39:20.974493   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:50:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:29:22 +0000 UTC Type:0 Mac:52:54:00:bf:50:75 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-553062-m03 Clientid:01:52:54:00:bf:50:75}
	I1107 23:39:20.974533   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | domain multinode-553062-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:bf:50:75 in network mk-multinode-553062
	I1107 23:39:20.974705   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHPort
	I1107 23:39:20.974885   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHKeyPath
	I1107 23:39:20.975058   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHUsername
	I1107 23:39:20.975220   33391 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062-m03/id_rsa Username:docker}
	I1107 23:39:21.069446   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1107 23:39:21.069522   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1107 23:39:21.096431   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1107 23:39:21.096485   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1107 23:39:21.122416   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1107 23:39:21.122474   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1107 23:39:21.148597   33391 provision.go:86] duration metric: configureAuth took 298.839603ms
	I1107 23:39:21.148622   33391 buildroot.go:189] setting minikube options for container-runtime
	I1107 23:39:21.148905   33391 config.go:182] Loaded profile config "multinode-553062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:39:21.148984   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHHostname
	I1107 23:39:21.151500   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | domain multinode-553062-m03 has defined MAC address 52:54:00:bf:50:75 in network mk-multinode-553062
	I1107 23:39:21.151861   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:50:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:29:22 +0000 UTC Type:0 Mac:52:54:00:bf:50:75 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-553062-m03 Clientid:01:52:54:00:bf:50:75}
	I1107 23:39:21.151884   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | domain multinode-553062-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:bf:50:75 in network mk-multinode-553062
	I1107 23:39:21.152041   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHPort
	I1107 23:39:21.152247   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHKeyPath
	I1107 23:39:21.152393   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHKeyPath
	I1107 23:39:21.152549   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHUsername
	I1107 23:39:21.152713   33391 main.go:141] libmachine: Using SSH client type: native
	I1107 23:39:21.153091   33391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1107 23:39:21.153113   33391 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1107 23:40:51.716833   33391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1107 23:40:51.716865   33391 machine.go:91] provisioned docker machine in 1m31.152482311s
	I1107 23:40:51.716879   33391 start.go:300] post-start starting for "multinode-553062-m03" (driver="kvm2")
	I1107 23:40:51.716890   33391 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 23:40:51.716906   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .DriverName
	I1107 23:40:51.717238   33391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 23:40:51.717267   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHHostname
	I1107 23:40:51.720215   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | domain multinode-553062-m03 has defined MAC address 52:54:00:bf:50:75 in network mk-multinode-553062
	I1107 23:40:51.720637   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:50:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:29:22 +0000 UTC Type:0 Mac:52:54:00:bf:50:75 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-553062-m03 Clientid:01:52:54:00:bf:50:75}
	I1107 23:40:51.720666   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | domain multinode-553062-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:bf:50:75 in network mk-multinode-553062
	I1107 23:40:51.720896   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHPort
	I1107 23:40:51.721096   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHKeyPath
	I1107 23:40:51.721267   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHUsername
	I1107 23:40:51.721406   33391 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062-m03/id_rsa Username:docker}
	I1107 23:40:51.816173   33391 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 23:40:51.820452   33391 command_runner.go:130] > NAME=Buildroot
	I1107 23:40:51.820476   33391 command_runner.go:130] > VERSION=2021.02.12-1-gb75713b-dirty
	I1107 23:40:51.820483   33391 command_runner.go:130] > ID=buildroot
	I1107 23:40:51.820491   33391 command_runner.go:130] > VERSION_ID=2021.02.12
	I1107 23:40:51.820499   33391 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1107 23:40:51.820586   33391 info.go:137] Remote host: Buildroot 2021.02.12
	I1107 23:40:51.820612   33391 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1107 23:40:51.820689   33391 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1107 23:40:51.820780   33391 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1107 23:40:51.820793   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> /etc/ssl/certs/168482.pem
	I1107 23:40:51.820912   33391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 23:40:51.829086   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
	I1107 23:40:51.852517   33391 start.go:303] post-start completed in 135.625329ms
	I1107 23:40:51.852540   33391 fix.go:56] fixHost completed within 1m31.308569308s
	I1107 23:40:51.852562   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHHostname
	I1107 23:40:51.854915   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | domain multinode-553062-m03 has defined MAC address 52:54:00:bf:50:75 in network mk-multinode-553062
	I1107 23:40:51.855300   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:50:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:29:22 +0000 UTC Type:0 Mac:52:54:00:bf:50:75 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-553062-m03 Clientid:01:52:54:00:bf:50:75}
	I1107 23:40:51.855334   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | domain multinode-553062-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:bf:50:75 in network mk-multinode-553062
	I1107 23:40:51.855474   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHPort
	I1107 23:40:51.855670   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHKeyPath
	I1107 23:40:51.855830   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHKeyPath
	I1107 23:40:51.855999   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHUsername
	I1107 23:40:51.856168   33391 main.go:141] libmachine: Using SSH client type: native
	I1107 23:40:51.856617   33391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I1107 23:40:51.856631   33391 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1107 23:40:51.981437   33391 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699400451.975877275
	
	I1107 23:40:51.981462   33391 fix.go:206] guest clock: 1699400451.975877275
	I1107 23:40:51.981472   33391 fix.go:219] Guest: 2023-11-07 23:40:51.975877275 +0000 UTC Remote: 2023-11-07 23:40:51.852543741 +0000 UTC m=+548.964026401 (delta=123.333534ms)
	I1107 23:40:51.981493   33391 fix.go:190] guest clock delta is within tolerance: 123.333534ms
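	With the missing fmt arguments filled in, the clock probe logged above simply prints the guest's epoch time in seconds and nanoseconds, which fix.go compares against the host clock to compute the delta:

	    date +%s.%N    # prints e.g. 1699400451.975877275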
	I1107 23:40:51.981500   33391 start.go:83] releasing machines lock for "multinode-553062-m03", held for 1m31.437538864s
	I1107 23:40:51.981531   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .DriverName
	I1107 23:40:51.981785   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetIP
	I1107 23:40:51.984346   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | domain multinode-553062-m03 has defined MAC address 52:54:00:bf:50:75 in network mk-multinode-553062
	I1107 23:40:51.984724   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:50:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:29:22 +0000 UTC Type:0 Mac:52:54:00:bf:50:75 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-553062-m03 Clientid:01:52:54:00:bf:50:75}
	I1107 23:40:51.984752   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | domain multinode-553062-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:bf:50:75 in network mk-multinode-553062
	I1107 23:40:51.986959   33391 out.go:177] * Found network options:
	I1107 23:40:51.988482   33391 out.go:177]   - NO_PROXY=192.168.39.246,192.168.39.137
	W1107 23:40:51.989726   33391 proxy.go:119] fail to check proxy env: Error ip not in block
	W1107 23:40:51.989759   33391 proxy.go:119] fail to check proxy env: Error ip not in block
	I1107 23:40:51.989775   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .DriverName
	I1107 23:40:51.990294   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .DriverName
	I1107 23:40:51.990470   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .DriverName
	I1107 23:40:51.990555   33391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 23:40:51.990604   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHHostname
	W1107 23:40:51.990613   33391 proxy.go:119] fail to check proxy env: Error ip not in block
	W1107 23:40:51.990630   33391 proxy.go:119] fail to check proxy env: Error ip not in block
	I1107 23:40:51.990695   33391 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1107 23:40:51.990710   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHHostname
	I1107 23:40:51.993720   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | domain multinode-553062-m03 has defined MAC address 52:54:00:bf:50:75 in network mk-multinode-553062
	I1107 23:40:51.993749   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | domain multinode-553062-m03 has defined MAC address 52:54:00:bf:50:75 in network mk-multinode-553062
	I1107 23:40:51.994209   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:50:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:29:22 +0000 UTC Type:0 Mac:52:54:00:bf:50:75 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-553062-m03 Clientid:01:52:54:00:bf:50:75}
	I1107 23:40:51.994238   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | domain multinode-553062-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:bf:50:75 in network mk-multinode-553062
	I1107 23:40:51.994270   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:50:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:29:22 +0000 UTC Type:0 Mac:52:54:00:bf:50:75 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-553062-m03 Clientid:01:52:54:00:bf:50:75}
	I1107 23:40:51.994295   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | domain multinode-553062-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:bf:50:75 in network mk-multinode-553062
	I1107 23:40:51.994373   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHPort
	I1107 23:40:51.994531   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHPort
	I1107 23:40:51.994559   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHKeyPath
	I1107 23:40:51.994661   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHKeyPath
	I1107 23:40:51.994692   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHUsername
	I1107 23:40:51.994860   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetSSHUsername
	I1107 23:40:51.994861   33391 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062-m03/id_rsa Username:docker}
	I1107 23:40:51.995005   33391 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062-m03/id_rsa Username:docker}
	I1107 23:40:52.114167   33391 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1107 23:40:52.232504   33391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1107 23:40:52.238958   33391 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1107 23:40:52.239052   33391 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1107 23:40:52.239102   33391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:40:52.249524   33391 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
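	Un-escaping the logged find command (its %!p(MISSING) is -printf "%p, " with the argument dropped by the logger), the step that renames bridge/podman CNI configs out of the way is roughly:

	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;

	Here it matched nothing, hence "nothing to disable".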
	I1107 23:40:52.249549   33391 start.go:472] detecting cgroup driver to use...
	I1107 23:40:52.249606   33391 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1107 23:40:52.265992   33391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1107 23:40:52.280484   33391 docker.go:203] disabling cri-docker service (if available) ...
	I1107 23:40:52.280527   33391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1107 23:40:52.298189   33391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1107 23:40:52.313858   33391 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1107 23:40:52.469110   33391 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1107 23:40:52.587014   33391 docker.go:219] disabling docker service ...
	I1107 23:40:52.587081   33391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1107 23:40:52.600830   33391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1107 23:40:52.613305   33391 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1107 23:40:52.731958   33391 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1107 23:40:52.854984   33391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1107 23:40:52.868832   33391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 23:40:52.886274   33391 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
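	Filled in, the crictl configuration step above amounts to the following equivalent one-liner; it points crictl at CRI-O's socket so the later crictl calls do not have to probe default endpoints:

	    sudo mkdir -p /etc && printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml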
	I1107 23:40:52.886671   33391 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1107 23:40:52.886714   33391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:40:52.897105   33391 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1107 23:40:52.897165   33391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:40:52.907600   33391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:40:52.919422   33391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
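	After these three sed edits, the relevant lines of /etc/crio/crio.conf.d/02-crio.conf read as below (a sketch reconstructed from the commands themselves; the crio config dump later in the log confirms the cgroup settings took effect):

	    pause_image = "registry.k8s.io/pause:3.9"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"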
	I1107 23:40:52.930482   33391 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1107 23:40:52.942286   33391 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1107 23:40:52.952621   33391 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1107 23:40:52.952728   33391 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1107 23:40:52.962077   33391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 23:40:53.090532   33391 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1107 23:40:53.316176   33391 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1107 23:40:53.316253   33391 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1107 23:40:53.322017   33391 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1107 23:40:53.322044   33391 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1107 23:40:53.322055   33391 command_runner.go:130] > Device: 16h/22d	Inode: 1258        Links: 1
	I1107 23:40:53.322065   33391 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1107 23:40:53.322073   33391 command_runner.go:130] > Access: 2023-11-07 23:40:53.241001860 +0000
	I1107 23:40:53.322086   33391 command_runner.go:130] > Modify: 2023-11-07 23:40:53.241001860 +0000
	I1107 23:40:53.322097   33391 command_runner.go:130] > Change: 2023-11-07 23:40:53.241001860 +0000
	I1107 23:40:53.322104   33391 command_runner.go:130] >  Birth: -
	I1107 23:40:53.322121   33391 start.go:540] Will wait 60s for crictl version
	I1107 23:40:53.322175   33391 ssh_runner.go:195] Run: which crictl
	I1107 23:40:53.326026   33391 command_runner.go:130] > /usr/bin/crictl
	I1107 23:40:53.326369   33391 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1107 23:40:53.373873   33391 command_runner.go:130] > Version:  0.1.0
	I1107 23:40:53.373901   33391 command_runner.go:130] > RuntimeName:  cri-o
	I1107 23:40:53.373910   33391 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1107 23:40:53.373918   33391 command_runner.go:130] > RuntimeApiVersion:  v1
	I1107 23:40:53.373939   33391 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1107 23:40:53.374018   33391 ssh_runner.go:195] Run: crio --version
	I1107 23:40:53.422335   33391 command_runner.go:130] > crio version 1.24.1
	I1107 23:40:53.422354   33391 command_runner.go:130] > Version:          1.24.1
	I1107 23:40:53.422368   33391 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1107 23:40:53.422374   33391 command_runner.go:130] > GitTreeState:     dirty
	I1107 23:40:53.422380   33391 command_runner.go:130] > BuildDate:        2023-11-07T07:32:32Z
	I1107 23:40:53.422385   33391 command_runner.go:130] > GoVersion:        go1.19.9
	I1107 23:40:53.422389   33391 command_runner.go:130] > Compiler:         gc
	I1107 23:40:53.422393   33391 command_runner.go:130] > Platform:         linux/amd64
	I1107 23:40:53.422398   33391 command_runner.go:130] > Linkmode:         dynamic
	I1107 23:40:53.422407   33391 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1107 23:40:53.422411   33391 command_runner.go:130] > SeccompEnabled:   true
	I1107 23:40:53.422415   33391 command_runner.go:130] > AppArmorEnabled:  false
	I1107 23:40:53.424011   33391 ssh_runner.go:195] Run: crio --version
	I1107 23:40:53.469402   33391 command_runner.go:130] > crio version 1.24.1
	I1107 23:40:53.469427   33391 command_runner.go:130] > Version:          1.24.1
	I1107 23:40:53.469435   33391 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1107 23:40:53.469439   33391 command_runner.go:130] > GitTreeState:     dirty
	I1107 23:40:53.469447   33391 command_runner.go:130] > BuildDate:        2023-11-07T07:32:32Z
	I1107 23:40:53.469452   33391 command_runner.go:130] > GoVersion:        go1.19.9
	I1107 23:40:53.469456   33391 command_runner.go:130] > Compiler:         gc
	I1107 23:40:53.469460   33391 command_runner.go:130] > Platform:         linux/amd64
	I1107 23:40:53.469465   33391 command_runner.go:130] > Linkmode:         dynamic
	I1107 23:40:53.469472   33391 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1107 23:40:53.469480   33391 command_runner.go:130] > SeccompEnabled:   true
	I1107 23:40:53.469489   33391 command_runner.go:130] > AppArmorEnabled:  false
	I1107 23:40:53.473093   33391 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1107 23:40:53.474435   33391 out.go:177]   - env NO_PROXY=192.168.39.246
	I1107 23:40:53.475802   33391 out.go:177]   - env NO_PROXY=192.168.39.246,192.168.39.137
	I1107 23:40:53.477140   33391 main.go:141] libmachine: (multinode-553062-m03) Calling .GetIP
	I1107 23:40:53.479902   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | domain multinode-553062-m03 has defined MAC address 52:54:00:bf:50:75 in network mk-multinode-553062
	I1107 23:40:53.480257   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:50:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:29:22 +0000 UTC Type:0 Mac:52:54:00:bf:50:75 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-553062-m03 Clientid:01:52:54:00:bf:50:75}
	I1107 23:40:53.480285   33391 main.go:141] libmachine: (multinode-553062-m03) DBG | domain multinode-553062-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:bf:50:75 in network mk-multinode-553062
	I1107 23:40:53.480480   33391 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1107 23:40:53.485151   33391 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1107 23:40:53.485190   33391 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062 for IP: 192.168.39.201
	I1107 23:40:53.485204   33391 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:40:53.485343   33391 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1107 23:40:53.485379   33391 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1107 23:40:53.485390   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1107 23:40:53.485406   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1107 23:40:53.485420   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1107 23:40:53.485433   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1107 23:40:53.485510   33391 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem (1338 bytes)
	W1107 23:40:53.485541   33391 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848_empty.pem, impossibly tiny 0 bytes
	I1107 23:40:53.485552   33391 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 23:40:53.485576   33391 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1107 23:40:53.485598   33391 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1107 23:40:53.485619   33391 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1107 23:40:53.485655   33391 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem (1708 bytes)
	I1107 23:40:53.485683   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem -> /usr/share/ca-certificates/16848.pem
	I1107 23:40:53.485696   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> /usr/share/ca-certificates/168482.pem
	I1107 23:40:53.485706   33391 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:40:53.486043   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 23:40:53.511414   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1107 23:40:53.536247   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 23:40:53.559668   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1107 23:40:53.583255   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem --> /usr/share/ca-certificates/16848.pem (1338 bytes)
	I1107 23:40:53.606459   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /usr/share/ca-certificates/168482.pem (1708 bytes)
	I1107 23:40:53.630012   33391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 23:40:53.654801   33391 ssh_runner.go:195] Run: openssl version
	I1107 23:40:53.661411   33391 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1107 23:40:53.661481   33391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16848.pem && ln -fs /usr/share/ca-certificates/16848.pem /etc/ssl/certs/16848.pem"
	I1107 23:40:53.670880   33391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16848.pem
	I1107 23:40:53.675541   33391 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1107 23:40:53.675625   33391 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1107 23:40:53.675667   33391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16848.pem
	I1107 23:40:53.681722   33391 command_runner.go:130] > 51391683
	I1107 23:40:53.681787   33391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16848.pem /etc/ssl/certs/51391683.0"
	I1107 23:40:53.690376   33391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168482.pem && ln -fs /usr/share/ca-certificates/168482.pem /etc/ssl/certs/168482.pem"
	I1107 23:40:53.700072   33391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168482.pem
	I1107 23:40:53.704703   33391 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1107 23:40:53.705074   33391 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1107 23:40:53.705139   33391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168482.pem
	I1107 23:40:53.710817   33391 command_runner.go:130] > 3ec20f2e
	I1107 23:40:53.710882   33391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168482.pem /etc/ssl/certs/3ec20f2e.0"
	I1107 23:40:53.719127   33391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 23:40:53.728503   33391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:40:53.733013   33391 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:40:53.733232   33391 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:40:53.733283   33391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:40:53.738620   33391 command_runner.go:130] > b5213941
	I1107 23:40:53.738677   33391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
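	The repeated hash-and-link pattern above is OpenSSL's subject-hash lookup scheme: each CA certificate is linked under <hash>.0 in /etc/ssl/certs so TLS libraries can locate it by hashed subject name. One iteration, using the minikubeCA values from the log:

	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"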
	I1107 23:40:53.747078   33391 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1107 23:40:53.751714   33391 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1107 23:40:53.751911   33391 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1107 23:40:53.751991   33391 ssh_runner.go:195] Run: crio config
	I1107 23:40:53.804414   33391 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1107 23:40:53.804434   33391 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1107 23:40:53.804441   33391 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1107 23:40:53.804444   33391 command_runner.go:130] > #
	I1107 23:40:53.804451   33391 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1107 23:40:53.804457   33391 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1107 23:40:53.804463   33391 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1107 23:40:53.804471   33391 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1107 23:40:53.804477   33391 command_runner.go:130] > # reload'.
	I1107 23:40:53.804483   33391 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1107 23:40:53.804489   33391 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1107 23:40:53.804495   33391 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1107 23:40:53.804501   33391 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1107 23:40:53.804508   33391 command_runner.go:130] > [crio]
	I1107 23:40:53.804514   33391 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1107 23:40:53.804521   33391 command_runner.go:130] > # containers images, in this directory.
	I1107 23:40:53.804526   33391 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1107 23:40:53.804537   33391 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1107 23:40:53.804542   33391 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1107 23:40:53.804551   33391 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1107 23:40:53.804557   33391 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1107 23:40:53.804564   33391 command_runner.go:130] > storage_driver = "overlay"
	I1107 23:40:53.804570   33391 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1107 23:40:53.804578   33391 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1107 23:40:53.804583   33391 command_runner.go:130] > storage_option = [
	I1107 23:40:53.804603   33391 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1107 23:40:53.804616   33391 command_runner.go:130] > ]
	I1107 23:40:53.804627   33391 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1107 23:40:53.804636   33391 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1107 23:40:53.804728   33391 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1107 23:40:53.804746   33391 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1107 23:40:53.804757   33391 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1107 23:40:53.804765   33391 command_runner.go:130] > # always happen on a node reboot
	I1107 23:40:53.804772   33391 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1107 23:40:53.804781   33391 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1107 23:40:53.804790   33391 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1107 23:40:53.804801   33391 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1107 23:40:53.804809   33391 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1107 23:40:53.804845   33391 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1107 23:40:53.804865   33391 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1107 23:40:53.804872   33391 command_runner.go:130] > # internal_wipe = true
	I1107 23:40:53.804879   33391 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1107 23:40:53.804885   33391 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1107 23:40:53.804893   33391 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1107 23:40:53.804900   33391 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1107 23:40:53.804908   33391 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1107 23:40:53.804913   33391 command_runner.go:130] > [crio.api]
	I1107 23:40:53.804925   33391 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1107 23:40:53.804936   33391 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1107 23:40:53.804949   33391 command_runner.go:130] > # IP address on which the stream server will listen.
	I1107 23:40:53.804959   33391 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1107 23:40:53.804973   33391 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1107 23:40:53.804982   33391 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1107 23:40:53.804993   33391 command_runner.go:130] > # stream_port = "0"
	I1107 23:40:53.805005   33391 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1107 23:40:53.805017   33391 command_runner.go:130] > # stream_enable_tls = false
	I1107 23:40:53.805031   33391 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1107 23:40:53.805041   33391 command_runner.go:130] > # stream_idle_timeout = ""
	I1107 23:40:53.805054   33391 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1107 23:40:53.805069   33391 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1107 23:40:53.805079   33391 command_runner.go:130] > # minutes.
	I1107 23:40:53.805090   33391 command_runner.go:130] > # stream_tls_cert = ""
	I1107 23:40:53.805104   33391 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1107 23:40:53.805118   33391 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1107 23:40:53.805129   33391 command_runner.go:130] > # stream_tls_key = ""
	I1107 23:40:53.805142   33391 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1107 23:40:53.805156   33391 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1107 23:40:53.805165   33391 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1107 23:40:53.805176   33391 command_runner.go:130] > # stream_tls_ca = ""
	I1107 23:40:53.805191   33391 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1107 23:40:53.805229   33391 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1107 23:40:53.805245   33391 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1107 23:40:53.805257   33391 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1107 23:40:53.805277   33391 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1107 23:40:53.805290   33391 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1107 23:40:53.805300   33391 command_runner.go:130] > [crio.runtime]
	I1107 23:40:53.805311   33391 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1107 23:40:53.805324   33391 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1107 23:40:53.805340   33391 command_runner.go:130] > # "nofile=1024:2048"
	I1107 23:40:53.805354   33391 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1107 23:40:53.805365   33391 command_runner.go:130] > # default_ulimits = [
	I1107 23:40:53.805372   33391 command_runner.go:130] > # ]
	I1107 23:40:53.805382   33391 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1107 23:40:53.805389   33391 command_runner.go:130] > # no_pivot = false
	I1107 23:40:53.805394   33391 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1107 23:40:53.805407   33391 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1107 23:40:53.805418   33391 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1107 23:40:53.805437   33391 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1107 23:40:53.805447   33391 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1107 23:40:53.805462   33391 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1107 23:40:53.805477   33391 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1107 23:40:53.805485   33391 command_runner.go:130] > # Cgroup setting for conmon
	I1107 23:40:53.805496   33391 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1107 23:40:53.805504   33391 command_runner.go:130] > conmon_cgroup = "pod"
	I1107 23:40:53.805514   33391 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1107 23:40:53.805523   33391 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1107 23:40:53.805537   33391 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1107 23:40:53.805547   33391 command_runner.go:130] > conmon_env = [
	I1107 23:40:53.805558   33391 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1107 23:40:53.805567   33391 command_runner.go:130] > ]
	I1107 23:40:53.805577   33391 command_runner.go:130] > # Additional environment variables to set for all the
	I1107 23:40:53.805589   33391 command_runner.go:130] > # containers. These are overridden if set in the
	I1107 23:40:53.805601   33391 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1107 23:40:53.805608   33391 command_runner.go:130] > # default_env = [
	I1107 23:40:53.805615   33391 command_runner.go:130] > # ]
	I1107 23:40:53.805633   33391 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1107 23:40:53.805644   33391 command_runner.go:130] > # selinux = false
	I1107 23:40:53.805658   33391 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1107 23:40:53.805672   33391 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1107 23:40:53.805684   33391 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1107 23:40:53.805696   33391 command_runner.go:130] > # seccomp_profile = ""
	I1107 23:40:53.805706   33391 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1107 23:40:53.805719   33391 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1107 23:40:53.805732   33391 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1107 23:40:53.805744   33391 command_runner.go:130] > # which might increase security.
	I1107 23:40:53.805755   33391 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1107 23:40:53.805769   33391 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1107 23:40:53.805782   33391 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1107 23:40:53.805795   33391 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1107 23:40:53.805802   33391 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1107 23:40:53.805812   33391 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:40:53.805824   33391 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1107 23:40:53.805837   33391 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1107 23:40:53.805848   33391 command_runner.go:130] > # the cgroup blockio controller.
	I1107 23:40:53.805859   33391 command_runner.go:130] > # blockio_config_file = ""
	I1107 23:40:53.805877   33391 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1107 23:40:53.805885   33391 command_runner.go:130] > # irqbalance daemon.
	I1107 23:40:53.805890   33391 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1107 23:40:53.805900   33391 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1107 23:40:53.805913   33391 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:40:53.805923   33391 command_runner.go:130] > # rdt_config_file = ""
	I1107 23:40:53.805932   33391 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1107 23:40:53.805940   33391 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1107 23:40:53.805953   33391 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1107 23:40:53.805964   33391 command_runner.go:130] > # separate_pull_cgroup = ""
	I1107 23:40:53.805976   33391 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1107 23:40:53.805990   33391 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1107 23:40:53.806000   33391 command_runner.go:130] > # will be added.
	I1107 23:40:53.806011   33391 command_runner.go:130] > # default_capabilities = [
	I1107 23:40:53.806017   33391 command_runner.go:130] > # 	"CHOWN",
	I1107 23:40:53.806085   33391 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1107 23:40:53.806101   33391 command_runner.go:130] > # 	"FSETID",
	I1107 23:40:53.806108   33391 command_runner.go:130] > # 	"FOWNER",
	I1107 23:40:53.806114   33391 command_runner.go:130] > # 	"SETGID",
	I1107 23:40:53.806121   33391 command_runner.go:130] > # 	"SETUID",
	I1107 23:40:53.806130   33391 command_runner.go:130] > # 	"SETPCAP",
	I1107 23:40:53.806138   33391 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1107 23:40:53.806147   33391 command_runner.go:130] > # 	"KILL",
	I1107 23:40:53.806153   33391 command_runner.go:130] > # ]
	I1107 23:40:53.806169   33391 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1107 23:40:53.806182   33391 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1107 23:40:53.806192   33391 command_runner.go:130] > # default_sysctls = [
	I1107 23:40:53.806200   33391 command_runner.go:130] > # ]
	I1107 23:40:53.806213   33391 command_runner.go:130] > # List of devices on the host that a
	I1107 23:40:53.806223   33391 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1107 23:40:53.806229   33391 command_runner.go:130] > # allowed_devices = [
	I1107 23:40:53.806236   33391 command_runner.go:130] > # 	"/dev/fuse",
	I1107 23:40:53.806241   33391 command_runner.go:130] > # ]
	I1107 23:40:53.806250   33391 command_runner.go:130] > # List of additional devices, specified as
	I1107 23:40:53.806266   33391 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1107 23:40:53.806279   33391 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1107 23:40:53.806301   33391 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1107 23:40:53.806308   33391 command_runner.go:130] > # additional_devices = [
	I1107 23:40:53.806312   33391 command_runner.go:130] > # ]
	I1107 23:40:53.806317   33391 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1107 23:40:53.806325   33391 command_runner.go:130] > # cdi_spec_dirs = [
	I1107 23:40:53.806331   33391 command_runner.go:130] > # 	"/etc/cdi",
	I1107 23:40:53.806341   33391 command_runner.go:130] > # 	"/var/run/cdi",
	I1107 23:40:53.806347   33391 command_runner.go:130] > # ]
	I1107 23:40:53.806361   33391 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1107 23:40:53.806374   33391 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1107 23:40:53.806384   33391 command_runner.go:130] > # Defaults to false.
	I1107 23:40:53.806393   33391 command_runner.go:130] > # device_ownership_from_security_context = false
	I1107 23:40:53.806406   33391 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1107 23:40:53.806416   33391 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1107 23:40:53.806424   33391 command_runner.go:130] > # hooks_dir = [
	I1107 23:40:53.806432   33391 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1107 23:40:53.806440   33391 command_runner.go:130] > # ]
	I1107 23:40:53.806450   33391 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1107 23:40:53.806465   33391 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1107 23:40:53.806476   33391 command_runner.go:130] > # its default mounts from the following two files:
	I1107 23:40:53.806485   33391 command_runner.go:130] > #
	I1107 23:40:53.806496   33391 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1107 23:40:53.806510   33391 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1107 23:40:53.806523   33391 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1107 23:40:53.806532   33391 command_runner.go:130] > #
	I1107 23:40:53.806542   33391 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1107 23:40:53.806556   33391 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1107 23:40:53.806570   33391 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1107 23:40:53.806583   33391 command_runner.go:130] > #      only add mounts it finds in this file.
	I1107 23:40:53.806590   33391 command_runner.go:130] > #
	I1107 23:40:53.806598   33391 command_runner.go:130] > # default_mounts_file = ""
	I1107 23:40:53.806610   33391 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1107 23:40:53.806625   33391 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1107 23:40:53.806634   33391 command_runner.go:130] > pids_limit = 1024
	I1107 23:40:53.806648   33391 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1107 23:40:53.806660   33391 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1107 23:40:53.806673   33391 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1107 23:40:53.806691   33391 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1107 23:40:53.806702   33391 command_runner.go:130] > # log_size_max = -1
	I1107 23:40:53.806714   33391 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1107 23:40:53.806725   33391 command_runner.go:130] > # log_to_journald = false
	I1107 23:40:53.806737   33391 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1107 23:40:53.806749   33391 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1107 23:40:53.806758   33391 command_runner.go:130] > # Path to directory for container attach sockets.
	I1107 23:40:53.806767   33391 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1107 23:40:53.806776   33391 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1107 23:40:53.806784   33391 command_runner.go:130] > # bind_mount_prefix = ""
	I1107 23:40:53.806795   33391 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1107 23:40:53.806804   33391 command_runner.go:130] > # read_only = false
	I1107 23:40:53.806814   33391 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1107 23:40:53.806829   33391 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1107 23:40:53.806837   33391 command_runner.go:130] > # live configuration reload.
	I1107 23:40:53.806848   33391 command_runner.go:130] > # log_level = "info"
	I1107 23:40:53.806858   33391 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1107 23:40:53.806870   33391 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:40:53.806880   33391 command_runner.go:130] > # log_filter = ""
	I1107 23:40:53.806890   33391 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1107 23:40:53.806900   33391 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1107 23:40:53.806908   33391 command_runner.go:130] > # separated by comma.
	I1107 23:40:53.806917   33391 command_runner.go:130] > # uid_mappings = ""
	I1107 23:40:53.806927   33391 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1107 23:40:53.806942   33391 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1107 23:40:53.806953   33391 command_runner.go:130] > # separated by comma.
	I1107 23:40:53.806961   33391 command_runner.go:130] > # gid_mappings = ""
	I1107 23:40:53.806975   33391 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1107 23:40:53.806988   33391 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1107 23:40:53.807000   33391 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1107 23:40:53.807007   33391 command_runner.go:130] > # minimum_mappable_uid = -1
	I1107 23:40:53.807034   33391 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1107 23:40:53.807043   33391 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1107 23:40:53.807050   33391 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1107 23:40:53.807057   33391 command_runner.go:130] > # minimum_mappable_gid = -1
	I1107 23:40:53.807063   33391 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1107 23:40:53.807073   33391 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1107 23:40:53.807082   33391 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1107 23:40:53.807086   33391 command_runner.go:130] > # ctr_stop_timeout = 30
	I1107 23:40:53.807094   33391 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1107 23:40:53.807101   33391 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1107 23:40:53.807108   33391 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1107 23:40:53.807113   33391 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1107 23:40:53.807119   33391 command_runner.go:130] > drop_infra_ctr = false
	I1107 23:40:53.807125   33391 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1107 23:40:53.807133   33391 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1107 23:40:53.807141   33391 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1107 23:40:53.807148   33391 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1107 23:40:53.807154   33391 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1107 23:40:53.807161   33391 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1107 23:40:53.807165   33391 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1107 23:40:53.807175   33391 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1107 23:40:53.807181   33391 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1107 23:40:53.807187   33391 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1107 23:40:53.807196   33391 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1107 23:40:53.807202   33391 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1107 23:40:53.807209   33391 command_runner.go:130] > # default_runtime = "runc"
	I1107 23:40:53.807214   33391 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1107 23:40:53.807223   33391 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1107 23:40:53.807235   33391 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1107 23:40:53.807240   33391 command_runner.go:130] > # creation as a file is not desired either.
	I1107 23:40:53.807248   33391 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1107 23:40:53.807256   33391 command_runner.go:130] > # the hostname is being managed dynamically.
	I1107 23:40:53.807261   33391 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1107 23:40:53.807265   33391 command_runner.go:130] > # ]
	I1107 23:40:53.807273   33391 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1107 23:40:53.807282   33391 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1107 23:40:53.807289   33391 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1107 23:40:53.807298   33391 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1107 23:40:53.807301   33391 command_runner.go:130] > #
	I1107 23:40:53.807306   33391 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1107 23:40:53.807314   33391 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1107 23:40:53.807318   33391 command_runner.go:130] > #  runtime_type = "oci"
	I1107 23:40:53.807325   33391 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1107 23:40:53.807330   33391 command_runner.go:130] > #  privileged_without_host_devices = false
	I1107 23:40:53.807335   33391 command_runner.go:130] > #  allowed_annotations = []
	I1107 23:40:53.807338   33391 command_runner.go:130] > # Where:
	I1107 23:40:53.807344   33391 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1107 23:40:53.807352   33391 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1107 23:40:53.807358   33391 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1107 23:40:53.807367   33391 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1107 23:40:53.807374   33391 command_runner.go:130] > #   in $PATH.
	I1107 23:40:53.807381   33391 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1107 23:40:53.807388   33391 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1107 23:40:53.807394   33391 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1107 23:40:53.807400   33391 command_runner.go:130] > #   state.
	I1107 23:40:53.807407   33391 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1107 23:40:53.807416   33391 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1107 23:40:53.807422   33391 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1107 23:40:53.807430   33391 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1107 23:40:53.807438   33391 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1107 23:40:53.807447   33391 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1107 23:40:53.807452   33391 command_runner.go:130] > #   The currently recognized values are:
	I1107 23:40:53.807461   33391 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1107 23:40:53.807470   33391 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1107 23:40:53.807478   33391 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1107 23:40:53.807487   33391 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1107 23:40:53.807497   33391 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1107 23:40:53.807517   33391 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1107 23:40:53.807526   33391 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1107 23:40:53.807532   33391 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1107 23:40:53.807540   33391 command_runner.go:130] > #   should be moved to the container's cgroup
	I1107 23:40:53.807545   33391 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1107 23:40:53.807550   33391 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1107 23:40:53.807555   33391 command_runner.go:130] > runtime_type = "oci"
	I1107 23:40:53.807561   33391 command_runner.go:130] > runtime_root = "/run/runc"
	I1107 23:40:53.807566   33391 command_runner.go:130] > runtime_config_path = ""
	I1107 23:40:53.807572   33391 command_runner.go:130] > monitor_path = ""
	I1107 23:40:53.807576   33391 command_runner.go:130] > monitor_cgroup = ""
	I1107 23:40:53.807580   33391 command_runner.go:130] > monitor_exec_cgroup = ""
	I1107 23:40:53.807586   33391 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1107 23:40:53.807593   33391 command_runner.go:130] > # running containers
	I1107 23:40:53.807597   33391 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1107 23:40:53.807605   33391 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1107 23:40:53.807628   33391 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1107 23:40:53.807636   33391 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I1107 23:40:53.807641   33391 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1107 23:40:53.807648   33391 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1107 23:40:53.807653   33391 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1107 23:40:53.807660   33391 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1107 23:40:53.807665   33391 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1107 23:40:53.807670   33391 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1107 23:40:53.807679   33391 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1107 23:40:53.807684   33391 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1107 23:40:53.807691   33391 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1107 23:40:53.807698   33391 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1107 23:40:53.807708   33391 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1107 23:40:53.807713   33391 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1107 23:40:53.807723   33391 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1107 23:40:53.807733   33391 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1107 23:40:53.807740   33391 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1107 23:40:53.807747   33391 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1107 23:40:53.807753   33391 command_runner.go:130] > # Example:
	I1107 23:40:53.807758   33391 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1107 23:40:53.807766   33391 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1107 23:40:53.807771   33391 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1107 23:40:53.807779   33391 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1107 23:40:53.807783   33391 command_runner.go:130] > # cpuset = "0-1"
	I1107 23:40:53.807789   33391 command_runner.go:130] > # cpushares = 0
	I1107 23:40:53.807793   33391 command_runner.go:130] > # Where:
	I1107 23:40:53.807797   33391 command_runner.go:130] > # The workload name is workload-type.
	I1107 23:40:53.807807   33391 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1107 23:40:53.807812   33391 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1107 23:40:53.807820   33391 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1107 23:40:53.807828   33391 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1107 23:40:53.807835   33391 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1107 23:40:53.807839   33391 command_runner.go:130] > # 
	I1107 23:40:53.807847   33391 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1107 23:40:53.807850   33391 command_runner.go:130] > #
	I1107 23:40:53.807856   33391 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1107 23:40:53.807864   33391 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1107 23:40:53.807870   33391 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1107 23:40:53.807878   33391 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1107 23:40:53.807884   33391 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1107 23:40:53.807889   33391 command_runner.go:130] > [crio.image]
	I1107 23:40:53.807895   33391 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1107 23:40:53.807902   33391 command_runner.go:130] > # default_transport = "docker://"
	I1107 23:40:53.807908   33391 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1107 23:40:53.807916   33391 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1107 23:40:53.807920   33391 command_runner.go:130] > # global_auth_file = ""
	I1107 23:40:53.807925   33391 command_runner.go:130] > # The image used to instantiate infra containers.
	I1107 23:40:53.807933   33391 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:40:53.807937   33391 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1107 23:40:53.807944   33391 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1107 23:40:53.807952   33391 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1107 23:40:53.807959   33391 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:40:53.807964   33391 command_runner.go:130] > # pause_image_auth_file = ""
	I1107 23:40:53.807972   33391 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1107 23:40:53.807980   33391 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1107 23:40:53.807985   33391 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1107 23:40:53.808007   33391 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1107 23:40:53.808014   33391 command_runner.go:130] > # pause_command = "/pause"
	I1107 23:40:53.808023   33391 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1107 23:40:53.808036   33391 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1107 23:40:53.808049   33391 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1107 23:40:53.808058   33391 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1107 23:40:53.808063   33391 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1107 23:40:53.808073   33391 command_runner.go:130] > # signature_policy = ""
	I1107 23:40:53.808079   33391 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1107 23:40:53.808088   33391 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1107 23:40:53.808092   33391 command_runner.go:130] > # changing them here.
	I1107 23:40:53.808099   33391 command_runner.go:130] > # insecure_registries = [
	I1107 23:40:53.808102   33391 command_runner.go:130] > # ]
	I1107 23:40:53.808114   33391 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1107 23:40:53.808126   33391 command_runner.go:130] > # ignore; the last one ignores volumes entirely.
	I1107 23:40:53.808137   33391 command_runner.go:130] > # image_volumes = "mkdir"
	I1107 23:40:53.808148   33391 command_runner.go:130] > # Temporary directory to use for storing big files
	I1107 23:40:53.808153   33391 command_runner.go:130] > # big_files_temporary_dir = ""
	I1107 23:40:53.808160   33391 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1107 23:40:53.808165   33391 command_runner.go:130] > # CNI plugins.
	I1107 23:40:53.808169   33391 command_runner.go:130] > [crio.network]
	I1107 23:40:53.808178   33391 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1107 23:40:53.808183   33391 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1107 23:40:53.808189   33391 command_runner.go:130] > # cni_default_network = ""
	I1107 23:40:53.808197   33391 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1107 23:40:53.808208   33391 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1107 23:40:53.808222   33391 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1107 23:40:53.808229   33391 command_runner.go:130] > # plugin_dirs = [
	I1107 23:40:53.808240   33391 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1107 23:40:53.808246   33391 command_runner.go:130] > # ]
	I1107 23:40:53.808257   33391 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1107 23:40:53.808264   33391 command_runner.go:130] > [crio.metrics]
	I1107 23:40:53.808269   33391 command_runner.go:130] > # Globally enable or disable metrics support.
	I1107 23:40:53.808276   33391 command_runner.go:130] > enable_metrics = true
	I1107 23:40:53.808280   33391 command_runner.go:130] > # Specify enabled metrics collectors.
	I1107 23:40:53.808285   33391 command_runner.go:130] > # Per default all metrics are enabled.
	I1107 23:40:53.808294   33391 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1107 23:40:53.808304   33391 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1107 23:40:53.808318   33391 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1107 23:40:53.808326   33391 command_runner.go:130] > # metrics_collectors = [
	I1107 23:40:53.808336   33391 command_runner.go:130] > # 	"operations",
	I1107 23:40:53.808347   33391 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1107 23:40:53.808358   33391 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1107 23:40:53.808368   33391 command_runner.go:130] > # 	"operations_errors",
	I1107 23:40:53.808374   33391 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1107 23:40:53.808384   33391 command_runner.go:130] > # 	"image_pulls_by_name",
	I1107 23:40:53.808394   33391 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1107 23:40:53.808401   33391 command_runner.go:130] > # 	"image_pulls_failures",
	I1107 23:40:53.808412   33391 command_runner.go:130] > # 	"image_pulls_successes",
	I1107 23:40:53.808422   33391 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1107 23:40:53.808428   33391 command_runner.go:130] > # 	"image_layer_reuse",
	I1107 23:40:53.808438   33391 command_runner.go:130] > # 	"containers_oom_total",
	I1107 23:40:53.808445   33391 command_runner.go:130] > # 	"containers_oom",
	I1107 23:40:53.808455   33391 command_runner.go:130] > # 	"processes_defunct",
	I1107 23:40:53.808462   33391 command_runner.go:130] > # 	"operations_total",
	I1107 23:40:53.808473   33391 command_runner.go:130] > # 	"operations_latency_seconds",
	I1107 23:40:53.808485   33391 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1107 23:40:53.808493   33391 command_runner.go:130] > # 	"operations_errors_total",
	I1107 23:40:53.808504   33391 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1107 23:40:53.808515   33391 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1107 23:40:53.808521   33391 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1107 23:40:53.808528   33391 command_runner.go:130] > # 	"image_pulls_success_total",
	I1107 23:40:53.808532   33391 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1107 23:40:53.808538   33391 command_runner.go:130] > # 	"containers_oom_count_total",
	I1107 23:40:53.808542   33391 command_runner.go:130] > # ]
	I1107 23:40:53.808549   33391 command_runner.go:130] > # The port on which the metrics server will listen.
	I1107 23:40:53.808554   33391 command_runner.go:130] > # metrics_port = 9090
	I1107 23:40:53.808561   33391 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1107 23:40:53.808565   33391 command_runner.go:130] > # metrics_socket = ""
	I1107 23:40:53.808570   33391 command_runner.go:130] > # The certificate for the secure metrics server.
	I1107 23:40:53.808580   33391 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1107 23:40:53.808586   33391 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1107 23:40:53.808594   33391 command_runner.go:130] > # certificate on any modification event.
	I1107 23:40:53.808598   33391 command_runner.go:130] > # metrics_cert = ""
	I1107 23:40:53.808603   33391 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1107 23:40:53.808611   33391 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1107 23:40:53.808615   33391 command_runner.go:130] > # metrics_key = ""
	I1107 23:40:53.808622   33391 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1107 23:40:53.808627   33391 command_runner.go:130] > [crio.tracing]
	I1107 23:40:53.808634   33391 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1107 23:40:53.808639   33391 command_runner.go:130] > # enable_tracing = false
	I1107 23:40:53.808647   33391 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1107 23:40:53.808652   33391 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1107 23:40:53.808661   33391 command_runner.go:130] > # Number of samples to collect per million spans.
	I1107 23:40:53.808666   33391 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1107 23:40:53.808674   33391 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1107 23:40:53.808678   33391 command_runner.go:130] > [crio.stats]
	I1107 23:40:53.808687   33391 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1107 23:40:53.808692   33391 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1107 23:40:53.808699   33391 command_runner.go:130] > # stats_collection_period = 0
	I1107 23:40:53.808731   33391 command_runner.go:130] ! time="2023-11-07 23:40:53.795208386Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1107 23:40:53.808744   33391 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
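	The [crio.metrics] table dumped above has enable_metrics = true, so CRI-O exposes a Prometheus endpoint on metrics_port (9090 by default, commented out here). A minimal Go sketch of scraping that endpoint, assuming CRI-O is reachable on localhost:9090 from inside the VM (the address is an assumption for illustration):

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// enable_metrics = true makes CRI-O serve Prometheus text-format
		// metrics; localhost:9090 is an assumed address, matching the
		// default metrics_port in the config above.
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://localhost:9090/metrics")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", body) // e.g. crio_operations and image_pulls_* series
	}

	Per the prefixing rule noted in the config, a collector named "operations" surfaces as both "crio_operations" and "container_runtime_crio_operations".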
	I1107 23:40:53.808794   33391 cni.go:84] Creating CNI manager for ""
	I1107 23:40:53.808802   33391 cni.go:136] 3 nodes found, recommending kindnet
	I1107 23:40:53.808810   33391 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 23:40:53.808850   33391 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.201 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-553062 NodeName:multinode-553062-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.201 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1107 23:40:53.808945   33391 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.201
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-553062-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.201
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 23:40:53.808991   33391 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-553062-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.201
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-553062 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1107 23:40:53.809035   33391 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1107 23:40:53.820282   33391 command_runner.go:130] > kubeadm
	I1107 23:40:53.820302   33391 command_runner.go:130] > kubectl
	I1107 23:40:53.820308   33391 command_runner.go:130] > kubelet
	I1107 23:40:53.820567   33391 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 23:40:53.820628   33391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1107 23:40:53.831882   33391 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1107 23:40:53.849256   33391 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 23:40:53.865246   33391 ssh_runner.go:195] Run: grep 192.168.39.246	control-plane.minikube.internal$ /etc/hosts
	I1107 23:40:53.869140   33391 command_runner.go:130] > 192.168.39.246	control-plane.minikube.internal
	I1107 23:40:53.869215   33391 host.go:66] Checking if "multinode-553062" exists ...
	I1107 23:40:53.869507   33391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:40:53.869541   33391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:40:53.869540   33391 config.go:182] Loaded profile config "multinode-553062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:40:53.883921   33391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38541
	I1107 23:40:53.884297   33391 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:40:53.884673   33391 main.go:141] libmachine: Using API Version  1
	I1107 23:40:53.884691   33391 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:40:53.885055   33391 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:40:53.885285   33391 main.go:141] libmachine: (multinode-553062) Calling .DriverName
	I1107 23:40:53.885431   33391 start.go:304] JoinCluster: &{Name:multinode-553062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-553062 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.137 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.201 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:40:53.885545   33391 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1107 23:40:53.885563   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHHostname
	I1107 23:40:53.888149   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:40:53.888512   33391 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:36:53 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:40:53.888532   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:40:53.888697   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHPort
	I1107 23:40:53.888873   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:40:53.889016   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHUsername
	I1107 23:40:53.889143   33391 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062/id_rsa Username:docker}
	I1107 23:40:54.068445   33391 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token rfotym.g6t8ufedws0b50qx --discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 
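	The join command above comes from running kubeadm token create --print-join-command --ttl=0 on the control plane over SSH. A minimal sketch of invoking the same command with os/exec, assuming it runs directly on the control-plane host with kubeadm in $PATH:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// --ttl=0 creates a non-expiring bootstrap token; the output is a
		// complete "kubeadm join <endpoint> --token ..." line like the one
		// logged above.
		out, err := exec.Command("kubeadm", "token", "create",
			"--print-join-command", "--ttl=0").CombinedOutput()
		if err != nil {
			panic(fmt.Errorf("kubeadm token create failed: %v: %s", err, out))
		}
		fmt.Printf("%s", out)
	}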
	I1107 23:40:54.068547   33391 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.201 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}
	I1107 23:40:54.068600   33391 host.go:66] Checking if "multinode-553062" exists ...
	I1107 23:40:54.069021   33391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:40:54.069073   33391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:40:54.082934   33391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35701
	I1107 23:40:54.083356   33391 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:40:54.083779   33391 main.go:141] libmachine: Using API Version  1
	I1107 23:40:54.083800   33391 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:40:54.084069   33391 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:40:54.084266   33391 main.go:141] libmachine: (multinode-553062) Calling .DriverName
	I1107 23:40:54.084436   33391 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl drain multinode-553062-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1107 23:40:54.084456   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHHostname
	I1107 23:40:54.087318   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:40:54.087778   33391 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:36:53 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:40:54.087803   33391 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:40:54.087967   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHPort
	I1107 23:40:54.088119   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:40:54.088247   33391 main.go:141] libmachine: (multinode-553062) Calling .GetSSHUsername
	I1107 23:40:54.088401   33391 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062/id_rsa Username:docker}
	I1107 23:40:54.282358   33391 command_runner.go:130] > node/multinode-553062-m03 cordoned
	I1107 23:40:57.315832   33391 command_runner.go:130] > pod "busybox-5bc68d56bd-x55ww" has DeletionTimestamp older than 1 seconds, skipping
	I1107 23:40:57.315861   33391 command_runner.go:130] > node/multinode-553062-m03 drained
	I1107 23:40:57.317586   33391 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1107 23:40:57.317605   33391 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-g8624, kube-system/kube-proxy-xwp5j
	I1107 23:40:57.317625   33391 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl drain multinode-553062-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.233168884s)
	I1107 23:40:57.317635   33391 node.go:108] successfully drained node "m03"
	I1107 23:40:57.317970   33391 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1107 23:40:57.318205   33391 kapi.go:59] client config for multinode-553062: &rest.Config{Host:"https://192.168.39.246:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.key", CAFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:40:57.318493   33391 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1107 23:40:57.318548   33391 round_trippers.go:463] DELETE https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m03
	I1107 23:40:57.318559   33391 round_trippers.go:469] Request Headers:
	I1107 23:40:57.318571   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:40:57.318583   33391 round_trippers.go:473]     Content-Type: application/json
	I1107 23:40:57.318595   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:40:57.330729   33391 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1107 23:40:57.330747   33391 round_trippers.go:577] Response Headers:
	I1107 23:40:57.330753   33391 round_trippers.go:580]     Content-Length: 171
	I1107 23:40:57.330759   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:40:57 GMT
	I1107 23:40:57.330771   33391 round_trippers.go:580]     Audit-Id: 4c6b4828-2f55-46aa-9862-966467385f8e
	I1107 23:40:57.330776   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:40:57.330781   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:40:57.330788   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:40:57.330796   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:40:57.330819   33391 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-553062-m03","kind":"nodes","uid":"c69b0e89-b34f-4710-b818-78e5076041aa"}}
	I1107 23:40:57.330850   33391 node.go:124] successfully deleted node "m03"
	I1107 23:40:57.330859   33391 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.201 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}
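	Node removal here is a kubectl drain followed by a DELETE on /api/v1/nodes/multinode-553062-m03, visible in the round_trippers lines above. A minimal client-go sketch of the deletion step, assuming the kubeconfig path from the log and omitting the drain:

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path taken from the log above; adjust for other setups.
		cfg, err := clientcmd.BuildConfigFromFlags("",
			"/home/jenkins/minikube-integration/17585-9647/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Equivalent to the DELETE /api/v1/nodes/<name> request in the log.
		err = cs.CoreV1().Nodes().Delete(context.Background(),
			"multinode-553062-m03", metav1.DeleteOptions{})
		if err != nil {
			panic(err)
		}
	}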
	I1107 23:40:57.330875   33391 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.201 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}
	I1107 23:40:57.330892   33391 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rfotym.g6t8ufedws0b50qx --discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-553062-m03"
	I1107 23:40:57.403518   33391 command_runner.go:130] ! W1107 23:40:57.398050    2322 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1107 23:40:57.403950   33391 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1107 23:40:57.580166   33391 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1107 23:40:57.580195   33391 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1107 23:40:58.356068   33391 command_runner.go:130] > [preflight] Running pre-flight checks
	I1107 23:40:58.356096   33391 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1107 23:40:58.356110   33391 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1107 23:40:58.356123   33391 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 23:40:58.356135   33391 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 23:40:58.356146   33391 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1107 23:40:58.356156   33391 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1107 23:40:58.356168   33391 command_runner.go:130] > This node has joined the cluster:
	I1107 23:40:58.356182   33391 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1107 23:40:58.356195   33391 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1107 23:40:58.356208   33391 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1107 23:40:58.356389   33391 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rfotym.g6t8ufedws0b50qx --discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-553062-m03": (1.025476799s)
	I1107 23:40:58.356417   33391 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1107 23:40:58.648167   33391 start.go:306] JoinCluster complete in 4.762729235s
	I1107 23:40:58.648201   33391 cni.go:84] Creating CNI manager for ""
	I1107 23:40:58.648209   33391 cni.go:136] 3 nodes found, recommending kindnet
	I1107 23:40:58.648266   33391 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1107 23:40:58.655308   33391 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1107 23:40:58.655334   33391 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1107 23:40:58.655341   33391 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1107 23:40:58.655348   33391 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1107 23:40:58.655353   33391 command_runner.go:130] > Access: 2023-11-07 23:36:53.922905698 +0000
	I1107 23:40:58.655358   33391 command_runner.go:130] > Modify: 2023-11-07 07:42:50.000000000 +0000
	I1107 23:40:58.655365   33391 command_runner.go:130] > Change: 2023-11-07 23:36:52.115905698 +0000
	I1107 23:40:58.655372   33391 command_runner.go:130] >  Birth: -
	I1107 23:40:58.655416   33391 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1107 23:40:58.655430   33391 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1107 23:40:58.673909   33391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1107 23:40:59.044561   33391 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1107 23:40:59.048785   33391 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1107 23:40:59.052910   33391 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1107 23:40:59.067054   33391 command_runner.go:130] > daemonset.apps/kindnet configured
	I1107 23:40:59.070005   33391 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1107 23:40:59.070309   33391 kapi.go:59] client config for multinode-553062: &rest.Config{Host:"https://192.168.39.246:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.key", CAFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:40:59.070651   33391 round_trippers.go:463] GET https://192.168.39.246:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1107 23:40:59.070666   33391 round_trippers.go:469] Request Headers:
	I1107 23:40:59.070678   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:40:59.070687   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:40:59.073310   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:40:59.073331   33391 round_trippers.go:577] Response Headers:
	I1107 23:40:59.073341   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:40:59.073349   33391 round_trippers.go:580]     Content-Length: 291
	I1107 23:40:59.073362   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:40:59 GMT
	I1107 23:40:59.073374   33391 round_trippers.go:580]     Audit-Id: 4f7bb01c-c013-4903-9afd-2f6385cc4595
	I1107 23:40:59.073385   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:40:59.073394   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:40:59.073405   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:40:59.073512   33391 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"99a4298f-5274-4bac-956d-86f8091a0b82","resourceVersion":"859","creationTimestamp":"2023-11-07T23:26:57Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1107 23:40:59.073620   33391 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-553062" context rescaled to 1 replicas
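	The rescale above goes through the Scale subresource of the coredns deployment (the GET against .../deployments/coredns/scale, followed by an update). A minimal client-go sketch of the same read-modify-write, assuming the kubeconfig path from the log:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("",
			"/home/jenkins/minikube-integration/17585-9647/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		deployments := cs.AppsV1().Deployments("kube-system")
		// Read the current Scale subresource, as in the GET above.
		scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		scale.Spec.Replicas = 1 // match the "rescaled to 1 replicas" step
		if _, err := deployments.UpdateScale(ctx, "coredns", scale,
			metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("coredns scaled to 1 replica")
	}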
	I1107 23:40:59.073657   33391 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.201 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}
	I1107 23:40:59.075274   33391 out.go:177] * Verifying Kubernetes components...
	I1107 23:40:59.076647   33391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:40:59.090764   33391 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1107 23:40:59.090968   33391 kapi.go:59] client config for multinode-553062: &rest.Config{Host:"https://192.168.39.246:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/multinode-553062/client.key", CAFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:40:59.091176   33391 node_ready.go:35] waiting up to 6m0s for node "multinode-553062-m03" to be "Ready" ...
	I1107 23:40:59.091230   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m03
	I1107 23:40:59.091236   33391 round_trippers.go:469] Request Headers:
	I1107 23:40:59.091244   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:40:59.091252   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:40:59.093696   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:40:59.093717   33391 round_trippers.go:577] Response Headers:
	I1107 23:40:59.093726   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:40:59.093734   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:40:59.093742   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:40:59 GMT
	I1107 23:40:59.093755   33391 round_trippers.go:580]     Audit-Id: f00e6750-f835-46cd-b3b0-c201d3fca5b4
	I1107 23:40:59.093763   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:40:59.093771   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:40:59.093934   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m03","uid":"635dc90f-f541-4a77-89a1-07612efbc53a","resourceVersion":"1185","creationTimestamp":"2023-11-07T23:40:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:40:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:40:58Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1107 23:40:59.094203   33391 node_ready.go:49] node "multinode-553062-m03" has status "Ready":"True"
	I1107 23:40:59.094218   33391 node_ready.go:38] duration metric: took 3.02853ms waiting for node "multinode-553062-m03" to be "Ready" ...
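	The readiness check above polls GET /api/v1/nodes/<name> until the NodeReady condition reports True, with a 6m0s budget. A minimal client-go sketch of that wait loop, assuming the same kubeconfig path (the 3-second poll interval is an assumption):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the node's NodeReady condition is True.
	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("",
			"/home/jenkins/minikube-integration/17585-9647/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // same 6m0s budget as the log
		for time.Now().Before(deadline) {
			n, err := cs.CoreV1().Nodes().Get(context.Background(),
				"multinode-553062-m03", metav1.GetOptions{})
			if err == nil && nodeReady(n) {
				fmt.Println("node multinode-553062-m03 is Ready")
				return
			}
			time.Sleep(3 * time.Second)
		}
		panic("timed out waiting for node to become Ready")
	}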
	I1107 23:40:59.094228   33391 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:40:59.094297   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I1107 23:40:59.094306   33391 round_trippers.go:469] Request Headers:
	I1107 23:40:59.094317   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:40:59.094330   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:40:59.098069   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:40:59.098090   33391 round_trippers.go:577] Response Headers:
	I1107 23:40:59.098100   33391 round_trippers.go:580]     Audit-Id: cb39be73-bf11-4c61-83c3-c0f2dd646460
	I1107 23:40:59.098109   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:40:59.098119   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:40:59.098128   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:40:59.098137   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:40:59.098149   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:40:59 GMT
	I1107 23:40:59.099308   33391 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1191"},"items":[{"metadata":{"name":"coredns-5dd5756b68-6ggfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"785c6064-d793-4959-8e34-28b4fc2549fc","resourceVersion":"848","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b131694e-1b3b-40e6-bc1b-3f62a604903c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b131694e-1b3b-40e6-bc1b-3f62a604903c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82084 chars]
	I1107 23:40:59.101757   33391 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6ggfr" in "kube-system" namespace to be "Ready" ...
	I1107 23:40:59.101829   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6ggfr
	I1107 23:40:59.101840   33391 round_trippers.go:469] Request Headers:
	I1107 23:40:59.101850   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:40:59.101860   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:40:59.104054   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:40:59.104073   33391 round_trippers.go:577] Response Headers:
	I1107 23:40:59.104082   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:40:59.104090   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:40:59.104100   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:40:59.104109   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:40:59.104121   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:40:59 GMT
	I1107 23:40:59.104128   33391 round_trippers.go:580]     Audit-Id: 0d681af7-4805-4a70-95e5-a6daa8c56ffe
	I1107 23:40:59.104264   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6ggfr","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"785c6064-d793-4959-8e34-28b4fc2549fc","resourceVersion":"848","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b131694e-1b3b-40e6-bc1b-3f62a604903c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b131694e-1b3b-40e6-bc1b-3f62a604903c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1107 23:40:59.104741   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:40:59.104757   33391 round_trippers.go:469] Request Headers:
	I1107 23:40:59.104765   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:40:59.104772   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:40:59.107049   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:40:59.107070   33391 round_trippers.go:577] Response Headers:
	I1107 23:40:59.107080   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:40:59 GMT
	I1107 23:40:59.107089   33391 round_trippers.go:580]     Audit-Id: 39aada0b-4a0d-44d6-befd-623fce9444da
	I1107 23:40:59.107097   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:40:59.107113   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:40:59.107121   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:40:59.107133   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:40:59.107395   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"878","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1107 23:40:59.107660   33391 pod_ready.go:92] pod "coredns-5dd5756b68-6ggfr" in "kube-system" namespace has status "Ready":"True"
	I1107 23:40:59.107675   33391 pod_ready.go:81] duration metric: took 5.897805ms waiting for pod "coredns-5dd5756b68-6ggfr" in "kube-system" namespace to be "Ready" ...
	I1107 23:40:59.107688   33391 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:40:59.107736   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-553062
	I1107 23:40:59.107746   33391 round_trippers.go:469] Request Headers:
	I1107 23:40:59.107756   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:40:59.107766   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:40:59.110320   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:40:59.110341   33391 round_trippers.go:577] Response Headers:
	I1107 23:40:59.110350   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:40:59.110359   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:40:59.110371   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:40:59.110379   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:40:59.110390   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:40:59 GMT
	I1107 23:40:59.110402   33391 round_trippers.go:580]     Audit-Id: ba5c2e14-741d-4352-a42c-4321163fed44
	I1107 23:40:59.110698   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-553062","namespace":"kube-system","uid":"3819c5f8-686f-4ce6-95fb-e9d5bb68cbc1","resourceVersion":"839","creationTimestamp":"2023-11-07T23:26:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.246:2379","kubernetes.io/config.hash":"f82562fbdca14daeb385ae6968954f46","kubernetes.io/config.mirror":"f82562fbdca14daeb385ae6968954f46","kubernetes.io/config.seen":"2023-11-07T23:26:48.362630200Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1107 23:40:59.111066   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:40:59.111081   33391 round_trippers.go:469] Request Headers:
	I1107 23:40:59.111088   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:40:59.111093   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:40:59.113439   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:40:59.113455   33391 round_trippers.go:577] Response Headers:
	I1107 23:40:59.113463   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:40:59 GMT
	I1107 23:40:59.113471   33391 round_trippers.go:580]     Audit-Id: 2016c131-83fd-4deb-ba66-703de474e318
	I1107 23:40:59.113480   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:40:59.113495   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:40:59.113504   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:40:59.113516   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:40:59.113686   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"878","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1107 23:40:59.114058   33391 pod_ready.go:92] pod "etcd-multinode-553062" in "kube-system" namespace has status "Ready":"True"
	I1107 23:40:59.114077   33391 pod_ready.go:81] duration metric: took 6.381363ms waiting for pod "etcd-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:40:59.114094   33391 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:40:59.114141   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-553062
	I1107 23:40:59.114149   33391 round_trippers.go:469] Request Headers:
	I1107 23:40:59.114156   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:40:59.114166   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:40:59.116044   33391 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 23:40:59.116057   33391 round_trippers.go:577] Response Headers:
	I1107 23:40:59.116073   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:40:59.116083   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:40:59.116096   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:40:59 GMT
	I1107 23:40:59.116106   33391 round_trippers.go:580]     Audit-Id: 8497b332-7140-424f-bdb4-5945d666bc2f
	I1107 23:40:59.116118   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:40:59.116130   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:40:59.116467   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-553062","namespace":"kube-system","uid":"30896fa0-3d8f-4861-bdf5-ad94796ad097","resourceVersion":"841","creationTimestamp":"2023-11-07T23:26:57Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.246:8443","kubernetes.io/config.hash":"cf3161d745dce4ca9e35cf659a0b5ec9","kubernetes.io/config.mirror":"cf3161d745dce4ca9e35cf659a0b5ec9","kubernetes.io/config.seen":"2023-11-07T23:26:57.103263110Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1107 23:40:59.116803   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:40:59.116825   33391 round_trippers.go:469] Request Headers:
	I1107 23:40:59.116835   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:40:59.116845   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:40:59.119000   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:40:59.119018   33391 round_trippers.go:577] Response Headers:
	I1107 23:40:59.119035   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:40:59.119047   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:40:59 GMT
	I1107 23:40:59.119059   33391 round_trippers.go:580]     Audit-Id: fef56a84-a50b-4e28-acbf-7a68c71fd2e0
	I1107 23:40:59.119070   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:40:59.119081   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:40:59.119092   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:40:59.119261   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"878","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1107 23:40:59.119543   33391 pod_ready.go:92] pod "kube-apiserver-multinode-553062" in "kube-system" namespace has status "Ready":"True"
	I1107 23:40:59.119558   33391 pod_ready.go:81] duration metric: took 5.45485ms waiting for pod "kube-apiserver-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:40:59.119569   33391 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:40:59.119614   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-553062
	I1107 23:40:59.119623   33391 round_trippers.go:469] Request Headers:
	I1107 23:40:59.119633   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:40:59.119643   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:40:59.121681   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:40:59.121699   33391 round_trippers.go:577] Response Headers:
	I1107 23:40:59.121709   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:40:59 GMT
	I1107 23:40:59.121719   33391 round_trippers.go:580]     Audit-Id: a5c0f5c4-ad18-4257-b24f-96bed1f48ec4
	I1107 23:40:59.121730   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:40:59.121741   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:40:59.121752   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:40:59.121764   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:40:59.122091   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-553062","namespace":"kube-system","uid":"5a895945-b908-44ba-a1c8-93245f6a93f5","resourceVersion":"842","creationTimestamp":"2023-11-07T23:26:57Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6355e861fae0971467df802e2b4d8be6","kubernetes.io/config.mirror":"6355e861fae0971467df802e2b4d8be6","kubernetes.io/config.seen":"2023-11-07T23:26:57.103264314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1107 23:40:59.122532   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:40:59.122550   33391 round_trippers.go:469] Request Headers:
	I1107 23:40:59.122560   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:40:59.122572   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:40:59.124497   33391 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 23:40:59.124511   33391 round_trippers.go:577] Response Headers:
	I1107 23:40:59.124520   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:40:59.124529   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:40:59.124540   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:40:59 GMT
	I1107 23:40:59.124555   33391 round_trippers.go:580]     Audit-Id: 4a631a41-0bcc-4408-aa73-f145ca44da82
	I1107 23:40:59.124565   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:40:59.124576   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:40:59.124802   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"878","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1107 23:40:59.125168   33391 pod_ready.go:92] pod "kube-controller-manager-multinode-553062" in "kube-system" namespace has status "Ready":"True"
	I1107 23:40:59.125187   33391 pod_ready.go:81] duration metric: took 5.610637ms waiting for pod "kube-controller-manager-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:40:59.125199   33391 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-944rz" in "kube-system" namespace to be "Ready" ...
	I1107 23:40:59.291552   33391 request.go:629] Waited for 166.27202ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-944rz
	I1107 23:40:59.291622   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-944rz
	I1107 23:40:59.291628   33391 round_trippers.go:469] Request Headers:
	I1107 23:40:59.291641   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:40:59.291651   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:40:59.294851   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:40:59.294877   33391 round_trippers.go:577] Response Headers:
	I1107 23:40:59.294887   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:40:59.294895   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:40:59.294904   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:40:59 GMT
	I1107 23:40:59.294911   33391 round_trippers.go:580]     Audit-Id: 61d269f5-99b5-43af-bb58-fcafe4d7f711
	I1107 23:40:59.294916   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:40:59.294921   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:40:59.295657   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-944rz","generateName":"kube-proxy-","namespace":"kube-system","uid":"db20b1cf-b422-4649-a6e1-4549c4c56f33","resourceVersion":"772","creationTimestamp":"2023-11-07T23:27:10Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"072addbc-9bf2-4d6f-93c3-120a159f2721","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"072addbc-9bf2-4d6f-93c3-120a159f2721\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1107 23:40:59.491350   33391 request.go:629] Waited for 195.254994ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:40:59.491420   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:40:59.491428   33391 round_trippers.go:469] Request Headers:
	I1107 23:40:59.491436   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:40:59.491444   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:40:59.494803   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:40:59.494823   33391 round_trippers.go:577] Response Headers:
	I1107 23:40:59.494830   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:40:59.494835   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:40:59.494840   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:40:59.494845   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:40:59 GMT
	I1107 23:40:59.494851   33391 round_trippers.go:580]     Audit-Id: 013ccfd3-7e89-4e71-b33b-b5c509f49550
	I1107 23:40:59.494862   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:40:59.494999   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"878","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1107 23:40:59.495328   33391 pod_ready.go:92] pod "kube-proxy-944rz" in "kube-system" namespace has status "Ready":"True"
	I1107 23:40:59.495344   33391 pod_ready.go:81] duration metric: took 370.13311ms waiting for pod "kube-proxy-944rz" in "kube-system" namespace to be "Ready" ...
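
The repeated "Waited for ... due to client-side throttling, not priority and fairness" entries above are emitted by client-go's client-side rate limiter, not by the API server's priority-and-fairness machinery. A minimal Go sketch of where that limiter lives, assuming nothing beyond stock client-go (the kubeconfig path is a hypothetical placeholder):

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load a kubeconfig; the path here is a placeholder for illustration.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		// client-go defaults to QPS=5 and Burst=10; once that budget is spent,
		// each request blocks and logs the "client-side throttling" message
		// seen above. Raising the values shortens or removes those waits.
		cfg.QPS = 50
		cfg.Burst = 100
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Printf("client ready: %T\n", cs)
	}

With the default 5/10 budget, a burst of back-to-back GETs like the pod checks above starts queueing almost immediately, which is exactly what the ~190ms waits in the log show.
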
	I1107 23:40:59.495354   33391 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rktlk" in "kube-system" namespace to be "Ready" ...
	I1107 23:40:59.691797   33391 request.go:629] Waited for 196.37798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rktlk
	I1107 23:40:59.691878   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rktlk
	I1107 23:40:59.691890   33391 round_trippers.go:469] Request Headers:
	I1107 23:40:59.691901   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:40:59.691914   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:40:59.695294   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:40:59.695317   33391 round_trippers.go:577] Response Headers:
	I1107 23:40:59.695329   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:40:59.695339   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:40:59 GMT
	I1107 23:40:59.695349   33391 round_trippers.go:580]     Audit-Id: c266eea4-130c-405b-9b4a-d478c7b25b35
	I1107 23:40:59.695358   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:40:59.695368   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:40:59.695375   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:40:59.696081   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rktlk","generateName":"kube-proxy-","namespace":"kube-system","uid":"92ea69ee-cd72-4594-a338-9837cc44e5a1","resourceVersion":"1030","creationTimestamp":"2023-11-07T23:27:50Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"072addbc-9bf2-4d6f-93c3-120a159f2721","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:27:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"072addbc-9bf2-4d6f-93c3-120a159f2721\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5730 chars]
	I1107 23:40:59.891916   33391 request.go:629] Waited for 195.39113ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:40:59.891983   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m02
	I1107 23:40:59.891988   33391 round_trippers.go:469] Request Headers:
	I1107 23:40:59.891996   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:40:59.892007   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:40:59.896112   33391 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1107 23:40:59.896137   33391 round_trippers.go:577] Response Headers:
	I1107 23:40:59.896144   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:40:59.896149   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:40:59.896155   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:40:59.896160   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:40:59.896166   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:40:59 GMT
	I1107 23:40:59.896171   33391 round_trippers.go:580]     Audit-Id: c232e8ff-3cb4-48d6-a105-b06d888b1c4b
	I1107 23:40:59.896409   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m02","uid":"4d60d501-112e-48fa-9d2b-2a6a7823e694","resourceVersion":"1011","creationTimestamp":"2023-11-07T23:39:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:39:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:39:17Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1107 23:40:59.896764   33391 pod_ready.go:92] pod "kube-proxy-rktlk" in "kube-system" namespace has status "Ready":"True"
	I1107 23:40:59.896789   33391 pod_ready.go:81] duration metric: took 401.425618ms waiting for pod "kube-proxy-rktlk" in "kube-system" namespace to be "Ready" ...
	I1107 23:40:59.896801   33391 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xwp5j" in "kube-system" namespace to be "Ready" ...
	I1107 23:41:00.092245   33391 request.go:629] Waited for 195.357022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xwp5j
	I1107 23:41:00.092338   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xwp5j
	I1107 23:41:00.092350   33391 round_trippers.go:469] Request Headers:
	I1107 23:41:00.092361   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:41:00.092374   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:41:00.094975   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:41:00.094990   33391 round_trippers.go:577] Response Headers:
	I1107 23:41:00.094997   33391 round_trippers.go:580]     Audit-Id: 94442d14-5a36-4e52-9e7e-63ef2e0badc0
	I1107 23:41:00.095003   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:41:00.095010   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:41:00.095015   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:41:00.095021   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:41:00.095026   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:41:00 GMT
	I1107 23:41:00.095166   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xwp5j","generateName":"kube-proxy-","namespace":"kube-system","uid":"0347e6b5-3070-4b6a-ae2b-d1ac56a385cd","resourceVersion":"1202","creationTimestamp":"2023-11-07T23:28:45Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"072addbc-9bf2-4d6f-93c3-120a159f2721","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:28:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"072addbc-9bf2-4d6f-93c3-120a159f2721\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5730 chars]
	I1107 23:41:00.291935   33391 request.go:629] Waited for 196.275705ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m03
	I1107 23:41:00.292015   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062-m03
	I1107 23:41:00.292026   33391 round_trippers.go:469] Request Headers:
	I1107 23:41:00.292038   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:41:00.292045   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:41:00.295076   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:41:00.295099   33391 round_trippers.go:577] Response Headers:
	I1107 23:41:00.295109   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:41:00.295118   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:41:00.295126   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:41:00.295135   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:41:00.295143   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:41:00 GMT
	I1107 23:41:00.295151   33391 round_trippers.go:580]     Audit-Id: 7a127ebd-de44-4ff8-82e3-e844219a2324
	I1107 23:41:00.295297   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062-m03","uid":"635dc90f-f541-4a77-89a1-07612efbc53a","resourceVersion":"1185","creationTimestamp":"2023-11-07T23:40:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:40:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:40:58Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1107 23:41:00.295581   33391 pod_ready.go:92] pod "kube-proxy-xwp5j" in "kube-system" namespace has status "Ready":"True"
	I1107 23:41:00.295599   33391 pod_ready.go:81] duration metric: took 398.789717ms waiting for pod "kube-proxy-xwp5j" in "kube-system" namespace to be "Ready" ...
	I1107 23:41:00.295607   33391 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:41:00.492032   33391 request.go:629] Waited for 196.368271ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-553062
	I1107 23:41:00.492097   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-553062
	I1107 23:41:00.492102   33391 round_trippers.go:469] Request Headers:
	I1107 23:41:00.492110   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:41:00.492117   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:41:00.495028   33391 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:41:00.495046   33391 round_trippers.go:577] Response Headers:
	I1107 23:41:00.495053   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:41:00.495059   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:41:00.495067   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:41:00.495075   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:41:00 GMT
	I1107 23:41:00.495087   33391 round_trippers.go:580]     Audit-Id: 47be9d21-44f8-4808-991d-9a965ea33c3f
	I1107 23:41:00.495099   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:41:00.495233   33391 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-553062","namespace":"kube-system","uid":"334a75af-c6cb-45ac-a020-8afc3f4a4e7a","resourceVersion":"870","creationTimestamp":"2023-11-07T23:26:57Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"101b31a45aab34f5dc66aed5e9e7cce1","kubernetes.io/config.mirror":"101b31a45aab34f5dc66aed5e9e7cce1","kubernetes.io/config.seen":"2023-11-07T23:26:57.103265171Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:26:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1107 23:41:00.692004   33391 request.go:629] Waited for 196.344147ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:41:00.692082   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/multinode-553062
	I1107 23:41:00.692089   33391 round_trippers.go:469] Request Headers:
	I1107 23:41:00.692100   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:41:00.692113   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:41:00.695621   33391 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:41:00.695641   33391 round_trippers.go:577] Response Headers:
	I1107 23:41:00.695649   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:41:00 GMT
	I1107 23:41:00.695654   33391 round_trippers.go:580]     Audit-Id: 6e7abba1-61e8-4b93-8419-17fd1bea3a28
	I1107 23:41:00.695660   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:41:00.695665   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:41:00.695670   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:41:00.695675   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:41:00.695822   33391 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"878","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:26:53Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1107 23:41:00.696144   33391 pod_ready.go:92] pod "kube-scheduler-multinode-553062" in "kube-system" namespace has status "Ready":"True"
	I1107 23:41:00.696162   33391 pod_ready.go:81] duration metric: took 400.54833ms waiting for pod "kube-scheduler-multinode-553062" in "kube-system" namespace to be "Ready" ...
	I1107 23:41:00.696175   33391 pod_ready.go:38] duration metric: took 1.601932161s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
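
The pod_ready.go entries above record minikube polling each system pod until its Ready condition reports True. A sketch of the same polling pattern using client-go's wait helpers; waitPodReady is an illustrative helper under assumed inputs, not minikube's actual implementation:

	package example

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls until the named pod reports condition Ready=True,
	// the same check the pod_ready.go log lines above are reporting on.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat transient errors as "not ready yet"
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}
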
	I1107 23:41:00.696192   33391 system_svc.go:44] waiting for kubelet service to be running ....
	I1107 23:41:00.696270   33391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:41:00.709077   33391 system_svc.go:56] duration metric: took 12.87797ms WaitForService to wait for kubelet.
	I1107 23:41:00.709101   33391 kubeadm.go:581] duration metric: took 1.635409721s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1107 23:41:00.709122   33391 node_conditions.go:102] verifying NodePressure condition ...
	I1107 23:41:00.891459   33391 request.go:629] Waited for 182.272239ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes
	I1107 23:41:00.891512   33391 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes
	I1107 23:41:00.891518   33391 round_trippers.go:469] Request Headers:
	I1107 23:41:00.891530   33391 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:41:00.891543   33391 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:41:00.896704   33391 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1107 23:41:00.896721   33391 round_trippers.go:577] Response Headers:
	I1107 23:41:00.896728   33391 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:41:00 GMT
	I1107 23:41:00.896737   33391 round_trippers.go:580]     Audit-Id: 8af8a0bd-efcb-4f7c-a380-e043b4741a7a
	I1107 23:41:00.896742   33391 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:41:00.896751   33391 round_trippers.go:580]     Content-Type: application/json
	I1107 23:41:00.896758   33391 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d9c093c7-388b-4fba-8fb7-750e3f759a5d
	I1107 23:41:00.896763   33391 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c401c41d-2cc8-47c2-a0ea-a999e27ec5ea
	I1107 23:41:00.898066   33391 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1205"},"items":[{"metadata":{"name":"multinode-553062","uid":"582cb77a-d110-41b7-a1f6-c75f6b4ec7c0","resourceVersion":"878","creationTimestamp":"2023-11-07T23:26:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553062","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-553062","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_26_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15135 chars]
	I1107 23:41:00.898630   33391 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1107 23:41:00.898649   33391 node_conditions.go:123] node cpu capacity is 2
	I1107 23:41:00.898657   33391 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1107 23:41:00.898663   33391 node_conditions.go:123] node cpu capacity is 2
	I1107 23:41:00.898674   33391 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1107 23:41:00.898679   33391 node_conditions.go:123] node cpu capacity is 2
	I1107 23:41:00.898686   33391 node_conditions.go:105] duration metric: took 189.559181ms to run NodePressure ...
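
The node_conditions figures above (ephemeral-storage capacity 17784752Ki and 2 CPUs for each of the three nodes) are read from the capacity fields of each Node's status, as returned by the NodeList request. A short sketch of the same lookup, assuming an already-built clientset:

	package example

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// listNodeCapacity prints the capacity fields checked during the
	// NodePressure verification above; cs is an assumed, pre-built clientset.
	func listNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
		}
		return nil
	}
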
	I1107 23:41:00.898700   33391 start.go:228] waiting for startup goroutines ...
	I1107 23:41:00.898718   33391 start.go:242] writing updated cluster config ...
	I1107 23:41:00.898997   33391 ssh_runner.go:195] Run: rm -f paused
	I1107 23:41:00.948669   33391 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1107 23:41:00.951776   33391 out.go:177] * Done! kubectl is now configured to use "multinode-553062" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-07 23:36:52 UTC, ends at Tue 2023-11-07 23:41:02 UTC. --
	Nov 07 23:41:02 multinode-553062 crio[715]: time="2023-11-07 23:41:02.002992815Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699400462002977382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=0b53db7d-1a73-4454-ac4f-a711e12dd33c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 07 23:41:02 multinode-553062 crio[715]: time="2023-11-07 23:41:02.003723217Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bc33808e-eedc-4832-b5fd-dbbbd1b7b517 name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:41:02 multinode-553062 crio[715]: time="2023-11-07 23:41:02.003800665Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bc33808e-eedc-4832-b5fd-dbbbd1b7b517 name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:41:02 multinode-553062 crio[715]: time="2023-11-07 23:41:02.004204597Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41a298b97aa004e2921cccc9708acc185aed0c460d6fc117f45eab1cb45de943,PodSandboxId:ce7366b2367fb602a3ec7f2fa3dbbeb8463b91680093ca075a212398de653393,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699400277714128856,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85179396-d02a-404a-a93e-e10db8c673b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6ef05ac5,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cd3697c03cfabd77924b6935d181ce4da01b4905ed3bf889c28ca7782d28587,PodSandboxId:6e156c4e7fe9a7fba9afc0b5512be40f08af82bfe80b3d7678cc752878d5b6ca,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1699400257500341420,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-tvwc7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aba25d32-a9c1-4008-b112-3409cec0c411,},Annotations:map[string]string{io.kubernetes.container.hash: 27d64535,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94509ac714182fe0d1859a1c9a4c7f004142ba88453f5ce912fe0bb6b9e9038,PodSandboxId:836c209aed5832e5ca1c961611faabcabd5b35f3a40e146d2e4d1439e40927cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699400254003037982,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6ggfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785c6064-d793-4959-8e34-28b4fc2549fc,},Annotations:map[string]string{io.kubernetes.container.hash: 128aa424,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f608c8d53c91999f21b5a3e23d77ee6628d761180ab6e5e4e5a398ca648446b,PodSandboxId:81346c42a53d7ce5e2502e67dad7846988212f07232c87cdb9ab7ad7ea23440e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1699400248756511056,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9stvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: a9981d59-dbff-456f-9024-2754c2a9d0c6,},Annotations:map[string]string{io.kubernetes.container.hash: 14e7cd4d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76223c5b6d54d091af38fe7b61c35d957cfd8cf163cd8f04d58c5ab085ef1140,PodSandboxId:ce7366b2367fb602a3ec7f2fa3dbbeb8463b91680093ca075a212398de653393,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1699400246934294072,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 85179396-d02a-404a-a93e-e10db8c673b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6ef05ac5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a40af2c6864b715c803779f18f916bc43115f63bd53e4a7dc3a7decfc1082466,PodSandboxId:14ee2ec2e3a38280f34b426118ef3a1ef36b68919f961ee3e9a4105591ac1b66,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699400246346542872,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-944rz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db20b1cf-b422-4649-a6e1-4549c4c5
6f33,},Annotations:map[string]string{io.kubernetes.container.hash: 495a4f8d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:442ed15392776a8e2dd97f07453c846efcf4b7d148c2eee4fb6cbd921929efb5,PodSandboxId:77078e714b78bdab930e84381a1651e0ea89f14b5a5143c6a138438babdb53d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699400240098714907,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 101b31a45aab34f5dc66aed5e9e7cce1,},Annot
ations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ca35498832b983b82af81f0b5a9ad09f36af667f5d1786302b5a5a42c0a71a,PodSandboxId:8872f79c6318f618f61912ac868ec67f522dbfbf9590e84b35eee78e3154058f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699400239616489695,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f82562fbdca14daeb385ae6968954f46,},Annotations:map[string]string{io.kubernetes.container.has
h: 25bdecaa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8aadbaec1f598eba616cdaa9f65b513e9bac1ff9567c831ce2a74ea90cf9f44,PodSandboxId:9a91da8760d17a74b11ec87d89232acf68b8f9dba9cc1f838da48532fdd5fd25,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699400239661375237,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf3161d745dce4ca9e35cf659a0b5ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 1a0265a9,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71c6ef248f06a376fd4600a4acb35acf6c97a4b57fd14f0abaf9e6ed1f16bdc2,PodSandboxId:d781f32c14ced2e2e830dd5c8f48369c080f7940d0d387b42219e918ccd3b787,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699400239639047859,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6355e861fae0971467df802e2b4d8be6,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bc33808e-eedc-4832-b5fd-dbbbd1b7b517 name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:41:02 multinode-553062 crio[715]: time="2023-11-07 23:41:02.048727406Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=48c62d8e-ac02-4799-9f35-55e3af22059c name=/runtime.v1.RuntimeService/Version
	Nov 07 23:41:02 multinode-553062 crio[715]: time="2023-11-07 23:41:02.048812814Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=48c62d8e-ac02-4799-9f35-55e3af22059c name=/runtime.v1.RuntimeService/Version
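
The journal entries in this section are CRI gRPC calls (RuntimeService/Version, ImageService/ImageFsInfo, RuntimeService/ListContainers) arriving on the crio.sock endpoint noted in the node annotations earlier in the log. A hedged sketch of issuing the same Version RPC from Go, assuming the published k8s.io/cri-api bindings and the socket path shown above:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		// Dial the CRI-O socket from the node's cri-socket annotation.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		// Same RPC as the "/runtime.v1.RuntimeService/Version" entries above;
		// the response mirrors the logged VersionResponse fields.
		resp, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s (CRI %s)\n", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
	}
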
	Nov 07 23:41:02 multinode-553062 crio[715]: time="2023-11-07 23:41:02.050231108Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=01583b09-c9ae-4508-a74d-40491d5d51c3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 07 23:41:02 multinode-553062 crio[715]: time="2023-11-07 23:41:02.050611726Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699400462050599820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=01583b09-c9ae-4508-a74d-40491d5d51c3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 07 23:41:02 multinode-553062 crio[715]: time="2023-11-07 23:41:02.051338391Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c4a28aac-6cd4-420d-9e94-ad7d4433d1a7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:41:02 multinode-553062 crio[715]: time="2023-11-07 23:41:02.051411476Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c4a28aac-6cd4-420d-9e94-ad7d4433d1a7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:41:02 multinode-553062 crio[715]: time="2023-11-07 23:41:02.051618114Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41a298b97aa004e2921cccc9708acc185aed0c460d6fc117f45eab1cb45de943,PodSandboxId:ce7366b2367fb602a3ec7f2fa3dbbeb8463b91680093ca075a212398de653393,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699400277714128856,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85179396-d02a-404a-a93e-e10db8c673b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6ef05ac5,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cd3697c03cfabd77924b6935d181ce4da01b4905ed3bf889c28ca7782d28587,PodSandboxId:6e156c4e7fe9a7fba9afc0b5512be40f08af82bfe80b3d7678cc752878d5b6ca,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1699400257500341420,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-tvwc7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aba25d32-a9c1-4008-b112-3409cec0c411,},Annotations:map[string]string{io.kubernetes.container.hash: 27d64535,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94509ac714182fe0d1859a1c9a4c7f004142ba88453f5ce912fe0bb6b9e9038,PodSandboxId:836c209aed5832e5ca1c961611faabcabd5b35f3a40e146d2e4d1439e40927cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699400254003037982,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6ggfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785c6064-d793-4959-8e34-28b4fc2549fc,},Annotations:map[string]string{io.kubernetes.container.hash: 128aa424,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f608c8d53c91999f21b5a3e23d77ee6628d761180ab6e5e4e5a398ca648446b,PodSandboxId:81346c42a53d7ce5e2502e67dad7846988212f07232c87cdb9ab7ad7ea23440e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1699400248756511056,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9stvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: a9981d59-dbff-456f-9024-2754c2a9d0c6,},Annotations:map[string]string{io.kubernetes.container.hash: 14e7cd4d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76223c5b6d54d091af38fe7b61c35d957cfd8cf163cd8f04d58c5ab085ef1140,PodSandboxId:ce7366b2367fb602a3ec7f2fa3dbbeb8463b91680093ca075a212398de653393,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1699400246934294072,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 85179396-d02a-404a-a93e-e10db8c673b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6ef05ac5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a40af2c6864b715c803779f18f916bc43115f63bd53e4a7dc3a7decfc1082466,PodSandboxId:14ee2ec2e3a38280f34b426118ef3a1ef36b68919f961ee3e9a4105591ac1b66,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699400246346542872,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-944rz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db20b1cf-b422-4649-a6e1-4549c4c5
6f33,},Annotations:map[string]string{io.kubernetes.container.hash: 495a4f8d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:442ed15392776a8e2dd97f07453c846efcf4b7d148c2eee4fb6cbd921929efb5,PodSandboxId:77078e714b78bdab930e84381a1651e0ea89f14b5a5143c6a138438babdb53d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699400240098714907,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 101b31a45aab34f5dc66aed5e9e7cce1,},Annot
ations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ca35498832b983b82af81f0b5a9ad09f36af667f5d1786302b5a5a42c0a71a,PodSandboxId:8872f79c6318f618f61912ac868ec67f522dbfbf9590e84b35eee78e3154058f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699400239616489695,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f82562fbdca14daeb385ae6968954f46,},Annotations:map[string]string{io.kubernetes.container.has
h: 25bdecaa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8aadbaec1f598eba616cdaa9f65b513e9bac1ff9567c831ce2a74ea90cf9f44,PodSandboxId:9a91da8760d17a74b11ec87d89232acf68b8f9dba9cc1f838da48532fdd5fd25,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699400239661375237,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf3161d745dce4ca9e35cf659a0b5ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 1a0265a9,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71c6ef248f06a376fd4600a4acb35acf6c97a4b57fd14f0abaf9e6ed1f16bdc2,PodSandboxId:d781f32c14ced2e2e830dd5c8f48369c080f7940d0d387b42219e918ccd3b787,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699400239639047859,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6355e861fae0971467df802e2b4d8be6,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c4a28aac-6cd4-420d-9e94-ad7d4433d1a7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:41:02 multinode-553062 crio[715]: time="2023-11-07 23:41:02.094216430Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=723d8341-cd3f-49f3-8fd0-40f47d7a655d name=/runtime.v1.RuntimeService/Version
	Nov 07 23:41:02 multinode-553062 crio[715]: time="2023-11-07 23:41:02.094269196Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=723d8341-cd3f-49f3-8fd0-40f47d7a655d name=/runtime.v1.RuntimeService/Version
	Nov 07 23:41:02 multinode-553062 crio[715]: time="2023-11-07 23:41:02.095393485Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5f344eb1-876c-4b71-a804-ef7bde81a3ce name=/runtime.v1.ImageService/ImageFsInfo
	Nov 07 23:41:02 multinode-553062 crio[715]: time="2023-11-07 23:41:02.095734291Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699400462095723759,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=5f344eb1-876c-4b71-a804-ef7bde81a3ce name=/runtime.v1.ImageService/ImageFsInfo
	Nov 07 23:41:02 multinode-553062 crio[715]: time="2023-11-07 23:41:02.096264317Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fb1e959c-8fe3-4fdc-b778-3e799283e24c name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:41:02 multinode-553062 crio[715]: time="2023-11-07 23:41:02.096339859Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fb1e959c-8fe3-4fdc-b778-3e799283e24c name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:41:02 multinode-553062 crio[715]: time="2023-11-07 23:41:02.096622835Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41a298b97aa004e2921cccc9708acc185aed0c460d6fc117f45eab1cb45de943,PodSandboxId:ce7366b2367fb602a3ec7f2fa3dbbeb8463b91680093ca075a212398de653393,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699400277714128856,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85179396-d02a-404a-a93e-e10db8c673b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6ef05ac5,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cd3697c03cfabd77924b6935d181ce4da01b4905ed3bf889c28ca7782d28587,PodSandboxId:6e156c4e7fe9a7fba9afc0b5512be40f08af82bfe80b3d7678cc752878d5b6ca,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1699400257500341420,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-tvwc7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aba25d32-a9c1-4008-b112-3409cec0c411,},Annotations:map[string]string{io.kubernetes.container.hash: 27d64535,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94509ac714182fe0d1859a1c9a4c7f004142ba88453f5ce912fe0bb6b9e9038,PodSandboxId:836c209aed5832e5ca1c961611faabcabd5b35f3a40e146d2e4d1439e40927cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699400254003037982,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6ggfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785c6064-d793-4959-8e34-28b4fc2549fc,},Annotations:map[string]string{io.kubernetes.container.hash: 128aa424,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f608c8d53c91999f21b5a3e23d77ee6628d761180ab6e5e4e5a398ca648446b,PodSandboxId:81346c42a53d7ce5e2502e67dad7846988212f07232c87cdb9ab7ad7ea23440e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1699400248756511056,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9stvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: a9981d59-dbff-456f-9024-2754c2a9d0c6,},Annotations:map[string]string{io.kubernetes.container.hash: 14e7cd4d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76223c5b6d54d091af38fe7b61c35d957cfd8cf163cd8f04d58c5ab085ef1140,PodSandboxId:ce7366b2367fb602a3ec7f2fa3dbbeb8463b91680093ca075a212398de653393,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1699400246934294072,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 85179396-d02a-404a-a93e-e10db8c673b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6ef05ac5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a40af2c6864b715c803779f18f916bc43115f63bd53e4a7dc3a7decfc1082466,PodSandboxId:14ee2ec2e3a38280f34b426118ef3a1ef36b68919f961ee3e9a4105591ac1b66,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699400246346542872,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-944rz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db20b1cf-b422-4649-a6e1-4549c4c5
6f33,},Annotations:map[string]string{io.kubernetes.container.hash: 495a4f8d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:442ed15392776a8e2dd97f07453c846efcf4b7d148c2eee4fb6cbd921929efb5,PodSandboxId:77078e714b78bdab930e84381a1651e0ea89f14b5a5143c6a138438babdb53d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699400240098714907,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 101b31a45aab34f5dc66aed5e9e7cce1,},Annot
ations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ca35498832b983b82af81f0b5a9ad09f36af667f5d1786302b5a5a42c0a71a,PodSandboxId:8872f79c6318f618f61912ac868ec67f522dbfbf9590e84b35eee78e3154058f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699400239616489695,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f82562fbdca14daeb385ae6968954f46,},Annotations:map[string]string{io.kubernetes.container.has
h: 25bdecaa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8aadbaec1f598eba616cdaa9f65b513e9bac1ff9567c831ce2a74ea90cf9f44,PodSandboxId:9a91da8760d17a74b11ec87d89232acf68b8f9dba9cc1f838da48532fdd5fd25,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699400239661375237,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf3161d745dce4ca9e35cf659a0b5ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 1a0265a9,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71c6ef248f06a376fd4600a4acb35acf6c97a4b57fd14f0abaf9e6ed1f16bdc2,PodSandboxId:d781f32c14ced2e2e830dd5c8f48369c080f7940d0d387b42219e918ccd3b787,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699400239639047859,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6355e861fae0971467df802e2b4d8be6,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fb1e959c-8fe3-4fdc-b778-3e799283e24c name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:41:02 multinode-553062 crio[715]: time="2023-11-07 23:41:02.136130483Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=223a1963-cb16-41f4-906f-4f079086b08c name=/runtime.v1.RuntimeService/Version
	Nov 07 23:41:02 multinode-553062 crio[715]: time="2023-11-07 23:41:02.136212332Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=223a1963-cb16-41f4-906f-4f079086b08c name=/runtime.v1.RuntimeService/Version
	Nov 07 23:41:02 multinode-553062 crio[715]: time="2023-11-07 23:41:02.137673956Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c799a393-c9bf-4b8a-8f0f-07ad1d5677ae name=/runtime.v1.ImageService/ImageFsInfo
	Nov 07 23:41:02 multinode-553062 crio[715]: time="2023-11-07 23:41:02.138203465Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699400462138189172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=c799a393-c9bf-4b8a-8f0f-07ad1d5677ae name=/runtime.v1.ImageService/ImageFsInfo
	Nov 07 23:41:02 multinode-553062 crio[715]: time="2023-11-07 23:41:02.139359443Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6fe436b5-400d-4016-94b5-e639845ba59d name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:41:02 multinode-553062 crio[715]: time="2023-11-07 23:41:02.139549099Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6fe436b5-400d-4016-94b5-e639845ba59d name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:41:02 multinode-553062 crio[715]: time="2023-11-07 23:41:02.139814644Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41a298b97aa004e2921cccc9708acc185aed0c460d6fc117f45eab1cb45de943,PodSandboxId:ce7366b2367fb602a3ec7f2fa3dbbeb8463b91680093ca075a212398de653393,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699400277714128856,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85179396-d02a-404a-a93e-e10db8c673b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6ef05ac5,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cd3697c03cfabd77924b6935d181ce4da01b4905ed3bf889c28ca7782d28587,PodSandboxId:6e156c4e7fe9a7fba9afc0b5512be40f08af82bfe80b3d7678cc752878d5b6ca,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1699400257500341420,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-tvwc7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aba25d32-a9c1-4008-b112-3409cec0c411,},Annotations:map[string]string{io.kubernetes.container.hash: 27d64535,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94509ac714182fe0d1859a1c9a4c7f004142ba88453f5ce912fe0bb6b9e9038,PodSandboxId:836c209aed5832e5ca1c961611faabcabd5b35f3a40e146d2e4d1439e40927cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699400254003037982,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6ggfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785c6064-d793-4959-8e34-28b4fc2549fc,},Annotations:map[string]string{io.kubernetes.container.hash: 128aa424,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f608c8d53c91999f21b5a3e23d77ee6628d761180ab6e5e4e5a398ca648446b,PodSandboxId:81346c42a53d7ce5e2502e67dad7846988212f07232c87cdb9ab7ad7ea23440e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1699400248756511056,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9stvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: a9981d59-dbff-456f-9024-2754c2a9d0c6,},Annotations:map[string]string{io.kubernetes.container.hash: 14e7cd4d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76223c5b6d54d091af38fe7b61c35d957cfd8cf163cd8f04d58c5ab085ef1140,PodSandboxId:ce7366b2367fb602a3ec7f2fa3dbbeb8463b91680093ca075a212398de653393,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1699400246934294072,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 85179396-d02a-404a-a93e-e10db8c673b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6ef05ac5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a40af2c6864b715c803779f18f916bc43115f63bd53e4a7dc3a7decfc1082466,PodSandboxId:14ee2ec2e3a38280f34b426118ef3a1ef36b68919f961ee3e9a4105591ac1b66,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699400246346542872,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-944rz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db20b1cf-b422-4649-a6e1-4549c4c5
6f33,},Annotations:map[string]string{io.kubernetes.container.hash: 495a4f8d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:442ed15392776a8e2dd97f07453c846efcf4b7d148c2eee4fb6cbd921929efb5,PodSandboxId:77078e714b78bdab930e84381a1651e0ea89f14b5a5143c6a138438babdb53d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699400240098714907,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 101b31a45aab34f5dc66aed5e9e7cce1,},Annot
ations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ca35498832b983b82af81f0b5a9ad09f36af667f5d1786302b5a5a42c0a71a,PodSandboxId:8872f79c6318f618f61912ac868ec67f522dbfbf9590e84b35eee78e3154058f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699400239616489695,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f82562fbdca14daeb385ae6968954f46,},Annotations:map[string]string{io.kubernetes.container.has
h: 25bdecaa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8aadbaec1f598eba616cdaa9f65b513e9bac1ff9567c831ce2a74ea90cf9f44,PodSandboxId:9a91da8760d17a74b11ec87d89232acf68b8f9dba9cc1f838da48532fdd5fd25,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699400239661375237,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf3161d745dce4ca9e35cf659a0b5ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 1a0265a9,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71c6ef248f06a376fd4600a4acb35acf6c97a4b57fd14f0abaf9e6ed1f16bdc2,PodSandboxId:d781f32c14ced2e2e830dd5c8f48369c080f7940d0d387b42219e918ccd3b787,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699400239639047859,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-553062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6355e861fae0971467df802e2b4d8be6,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6fe436b5-400d-4016-94b5-e639845ba59d name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	41a298b97aa00       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   ce7366b2367fb       storage-provisioner
	4cd3697c03cfa       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   6e156c4e7fe9a       busybox-5bc68d56bd-tvwc7
	c94509ac71418       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   836c209aed583       coredns-5dd5756b68-6ggfr
	1f608c8d53c91       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      3 minutes ago       Running             kindnet-cni               1                   81346c42a53d7       kindnet-9stvx
	76223c5b6d54d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   ce7366b2367fb       storage-provisioner
	a40af2c6864b7       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                      3 minutes ago       Running             kube-proxy                1                   14ee2ec2e3a38       kube-proxy-944rz
	442ed15392776       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                      3 minutes ago       Running             kube-scheduler            1                   77078e714b78b       kube-scheduler-multinode-553062
	d8aadbaec1f59       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                      3 minutes ago       Running             kube-apiserver            1                   9a91da8760d17       kube-apiserver-multinode-553062
	71c6ef248f06a       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                      3 minutes ago       Running             kube-controller-manager   1                   d781f32c14ced       kube-controller-manager-multinode-553062
	37ca35498832b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   8872f79c6318f       etcd-multinode-553062
	
	* 
	* ==> coredns [c94509ac714182fe0d1859a1c9a4c7f004142ba88453f5ce912fe0bb6b9e9038] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52732 - 11658 "HINFO IN 5597181203743005147.1871396532716646277. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009941097s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-553062
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-553062
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e
	                    minikube.k8s.io/name=multinode-553062
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_07T23_26_58_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 Nov 2023 23:26:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-553062
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 Nov 2023 23:40:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 Nov 2023 23:37:55 +0000   Tue, 07 Nov 2023 23:26:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 Nov 2023 23:37:55 +0000   Tue, 07 Nov 2023 23:26:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 Nov 2023 23:37:55 +0000   Tue, 07 Nov 2023 23:26:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 Nov 2023 23:37:55 +0000   Tue, 07 Nov 2023 23:37:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.246
	  Hostname:    multinode-553062
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 506f0e1682cc46079bf3cb06bd687e61
	  System UUID:                506f0e16-82cc-4607-9bf3-cb06bd687e61
	  Boot ID:                    131c8527-5702-401b-9516-b6982ee7acbc
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-tvwc7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-5dd5756b68-6ggfr                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-553062                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-9stvx                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-553062             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-553062    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-944rz                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-553062             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 3m35s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-553062 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-553062 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-553062 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           13m                    node-controller  Node multinode-553062 event: Registered Node multinode-553062 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-553062 status is now: NodeReady
	  Normal  Starting                 3m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m44s (x8 over 3m44s)  kubelet          Node multinode-553062 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m44s (x8 over 3m44s)  kubelet          Node multinode-553062 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m44s (x7 over 3m44s)  kubelet          Node multinode-553062 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m26s                  node-controller  Node multinode-553062 event: Registered Node multinode-553062 in Controller
	
	
	Name:               multinode-553062-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-553062-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 Nov 2023 23:39:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-553062-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 Nov 2023 23:40:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 Nov 2023 23:39:17 +0000   Tue, 07 Nov 2023 23:39:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 Nov 2023 23:39:17 +0000   Tue, 07 Nov 2023 23:39:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 Nov 2023 23:39:17 +0000   Tue, 07 Nov 2023 23:39:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 Nov 2023 23:39:17 +0000   Tue, 07 Nov 2023 23:39:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.137
	  Hostname:    multinode-553062-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 c575b648ec8840ffb6a9a6591e7501d2
	  System UUID:                c575b648-ec88-40ff-b6a9-a6591e7501d2
	  Boot ID:                    e163bf6e-df20-4938-8ce1-efd885f93d4c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-nnk6g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-4v85d               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-rktlk            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 103s                   kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeReady                13m                    kubelet          Node multinode-553062-m02 status is now: NodeReady
	  Normal   NodeNotReady             2m46s                  kubelet          Node multinode-553062-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m13s (x2 over 3m13s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeHasSufficientPID     106s (x6 over 13m)     kubelet          Node multinode-553062-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    106s (x6 over 13m)     kubelet          Node multinode-553062-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  106s (x6 over 13m)     kubelet          Node multinode-553062-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 105s                   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  105s (x2 over 105s)    kubelet          Node multinode-553062-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    105s (x2 over 105s)    kubelet          Node multinode-553062-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     105s (x2 over 105s)    kubelet          Node multinode-553062-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  105s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeReady                105s                   kubelet          Node multinode-553062-m02 status is now: NodeReady
	  Normal   RegisteredNode           101s                   node-controller  Node multinode-553062-m02 event: Registered Node multinode-553062-m02 in Controller
	
	
	Name:               multinode-553062-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-553062-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 Nov 2023 23:40:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-553062-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 Nov 2023 23:40:58 +0000   Tue, 07 Nov 2023 23:40:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 Nov 2023 23:40:58 +0000   Tue, 07 Nov 2023 23:40:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 Nov 2023 23:40:58 +0000   Tue, 07 Nov 2023 23:40:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 Nov 2023 23:40:58 +0000   Tue, 07 Nov 2023 23:40:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.201
	  Hostname:    multinode-553062-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 df65cc4b314c4c11b1185280f2ba96e3
	  System UUID:                df65cc4b-314c-4c11-b118-5280f2ba96e3
	  Boot ID:                    b1d23194-077d-47ed-a9da-beb4a1b043f4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-x55ww    0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kindnet-g8624               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-xwp5j            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From        Message
	  ----     ------                   ----               ----        -------
	  Normal   Starting                 11m                kube-proxy  
	  Normal   Starting                 12m                kube-proxy  
	  Normal   Starting                 2s                 kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet     Node multinode-553062-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)  kubelet     Node multinode-553062-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node multinode-553062-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                kubelet     Node multinode-553062-m03 status is now: NodeReady
	  Normal   Starting                 11m                kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)  kubelet     Node multinode-553062-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  11m                kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet     Node multinode-553062-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)  kubelet     Node multinode-553062-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                11m                kubelet     Node multinode-553062-m03 status is now: NodeReady
	  Normal   NodeNotReady             65s                kubelet     Node multinode-553062-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        34s (x2 over 94s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 5s                 kubelet     Starting kubelet.
	  Normal   NodeHasNoDiskPressure    4s (x2 over 4s)    kubelet     Node multinode-553062-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4s (x2 over 4s)    kubelet     Node multinode-553062-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  4s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                4s                 kubelet     Node multinode-553062-m03 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  4s (x2 over 4s)    kubelet     Node multinode-553062-m03 status is now: NodeHasSufficientMemory
	
	* 
	* ==> dmesg <==
	* [Nov 7 23:36] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.066365] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.392493] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.388743] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.149473] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.597520] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Nov 7 23:37] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.108674] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.147817] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.106614] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.208182] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[ +16.819017] systemd-fstab-generator[915]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [37ca35498832b983b82af81f0b5a9ad09f36af667f5d1786302b5a5a42c0a71a] <==
	* {"level":"info","ts":"2023-11-07T23:37:21.55443Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-07T23:37:21.554456Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-07T23:37:21.554735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b19954eb16571c64 switched to configuration voters=(12797353184818830436)"}
	{"level":"info","ts":"2023-11-07T23:37:21.554809Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7954d586cad9e091","local-member-id":"b19954eb16571c64","added-peer-id":"b19954eb16571c64","added-peer-peer-urls":["https://192.168.39.246:2380"]}
	{"level":"info","ts":"2023-11-07T23:37:21.555048Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7954d586cad9e091","local-member-id":"b19954eb16571c64","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-07T23:37:21.555112Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-07T23:37:21.560153Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-07T23:37:21.561035Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b19954eb16571c64","initial-advertise-peer-urls":["https://192.168.39.246:2380"],"listen-peer-urls":["https://192.168.39.246:2380"],"advertise-client-urls":["https://192.168.39.246:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.246:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-07T23:37:21.561145Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-07T23:37:21.561303Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.246:2380"}
	{"level":"info","ts":"2023-11-07T23:37:21.561338Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.246:2380"}
	{"level":"info","ts":"2023-11-07T23:37:23.22415Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b19954eb16571c64 is starting a new election at term 2"}
	{"level":"info","ts":"2023-11-07T23:37:23.224251Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b19954eb16571c64 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-11-07T23:37:23.224309Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b19954eb16571c64 received MsgPreVoteResp from b19954eb16571c64 at term 2"}
	{"level":"info","ts":"2023-11-07T23:37:23.224347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b19954eb16571c64 became candidate at term 3"}
	{"level":"info","ts":"2023-11-07T23:37:23.224371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b19954eb16571c64 received MsgVoteResp from b19954eb16571c64 at term 3"}
	{"level":"info","ts":"2023-11-07T23:37:23.224398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b19954eb16571c64 became leader at term 3"}
	{"level":"info","ts":"2023-11-07T23:37:23.224423Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b19954eb16571c64 elected leader b19954eb16571c64 at term 3"}
	{"level":"info","ts":"2023-11-07T23:37:23.227798Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-07T23:37:23.228762Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.246:2379"}
	{"level":"info","ts":"2023-11-07T23:37:23.227738Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b19954eb16571c64","local-member-attributes":"{Name:multinode-553062 ClientURLs:[https://192.168.39.246:2379]}","request-path":"/0/members/b19954eb16571c64/attributes","cluster-id":"7954d586cad9e091","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-07T23:37:23.229303Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-07T23:37:23.229631Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-07T23:37:23.229683Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-07T23:37:23.230424Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  23:41:02 up 4 min,  0 users,  load average: 0.11, 0.23, 0.11
	Linux multinode-553062 5.10.57 #1 SMP Tue Nov 7 06:51:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [1f608c8d53c91999f21b5a3e23d77ee6628d761180ab6e5e4e5a398ca648446b] <==
	* I1107 23:40:30.500781       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I1107 23:40:30.500888       1 main.go:227] handling current node
	I1107 23:40:30.501027       1 main.go:223] Handling node with IPs: map[192.168.39.137:{}]
	I1107 23:40:30.501082       1 main.go:250] Node multinode-553062-m02 has CIDR [10.244.1.0/24] 
	I1107 23:40:30.501255       1 main.go:223] Handling node with IPs: map[192.168.39.201:{}]
	I1107 23:40:30.501294       1 main.go:250] Node multinode-553062-m03 has CIDR [10.244.3.0/24] 
	I1107 23:40:40.506409       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I1107 23:40:40.506501       1 main.go:227] handling current node
	I1107 23:40:40.506543       1 main.go:223] Handling node with IPs: map[192.168.39.137:{}]
	I1107 23:40:40.506561       1 main.go:250] Node multinode-553062-m02 has CIDR [10.244.1.0/24] 
	I1107 23:40:40.506683       1 main.go:223] Handling node with IPs: map[192.168.39.201:{}]
	I1107 23:40:40.506704       1 main.go:250] Node multinode-553062-m03 has CIDR [10.244.3.0/24] 
	I1107 23:40:50.512594       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I1107 23:40:50.512690       1 main.go:227] handling current node
	I1107 23:40:50.512716       1 main.go:223] Handling node with IPs: map[192.168.39.137:{}]
	I1107 23:40:50.512735       1 main.go:250] Node multinode-553062-m02 has CIDR [10.244.1.0/24] 
	I1107 23:40:50.513039       1 main.go:223] Handling node with IPs: map[192.168.39.201:{}]
	I1107 23:40:50.513081       1 main.go:250] Node multinode-553062-m03 has CIDR [10.244.3.0/24] 
	I1107 23:41:00.526066       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I1107 23:41:00.526173       1 main.go:227] handling current node
	I1107 23:41:00.526198       1 main.go:223] Handling node with IPs: map[192.168.39.137:{}]
	I1107 23:41:00.526215       1 main.go:250] Node multinode-553062-m02 has CIDR [10.244.1.0/24] 
	I1107 23:41:00.526337       1 main.go:223] Handling node with IPs: map[192.168.39.201:{}]
	I1107 23:41:00.526358       1 main.go:250] Node multinode-553062-m03 has CIDR [10.244.2.0/24] 
	I1107 23:41:00.526415       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.201 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [d8aadbaec1f598eba616cdaa9f65b513e9bac1ff9567c831ce2a74ea90cf9f44] <==
	* I1107 23:37:24.522336       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1107 23:37:24.522441       1 apf_controller.go:372] Starting API Priority and Fairness config controller
	I1107 23:37:24.521864       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I1107 23:37:24.522229       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I1107 23:37:24.671720       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1107 23:37:24.673256       1 shared_informer.go:318] Caches are synced for configmaps
	I1107 23:37:24.673508       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1107 23:37:24.679791       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1107 23:37:24.693493       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1107 23:37:24.702659       1 aggregator.go:166] initial CRD sync complete...
	I1107 23:37:24.702719       1 autoregister_controller.go:141] Starting autoregister controller
	I1107 23:37:24.702748       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1107 23:37:24.702772       1 cache.go:39] Caches are synced for autoregister controller
	I1107 23:37:24.712411       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1107 23:37:24.722758       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1107 23:37:24.722992       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1107 23:37:24.748786       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1107 23:37:24.787134       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1107 23:37:25.553542       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1107 23:37:27.403153       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1107 23:37:27.542271       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1107 23:37:27.551728       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1107 23:37:27.632611       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1107 23:37:27.641145       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1107 23:38:15.248617       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [71c6ef248f06a376fd4600a4acb35acf6c97a4b57fd14f0abaf9e6ed1f16bdc2] <==
	* I1107 23:39:17.341176       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-z67r2" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-z67r2"
	I1107 23:39:17.361614       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-553062-m02" podCIDRs=["10.244.1.0/24"]
	I1107 23:39:17.414227       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-553062-m03"
	I1107 23:39:18.053411       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="9.830157ms"
	I1107 23:39:18.053774       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="213.69µs"
	I1107 23:39:18.219753       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="68.998µs"
	I1107 23:39:21.974565       1 event.go:307] "Event occurred" object="multinode-553062-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-553062-m02 event: Registered Node multinode-553062-m02 in Controller"
	I1107 23:39:29.489236       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="150.023µs"
	I1107 23:39:30.097226       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="139.621µs"
	I1107 23:39:30.102862       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="44.109µs"
	I1107 23:39:57.812725       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-553062-m02"
	I1107 23:40:54.322267       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-nnk6g"
	I1107 23:40:54.333464       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="30.086734ms"
	I1107 23:40:54.348530       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="14.98206ms"
	I1107 23:40:54.349366       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="98.496µs"
	I1107 23:40:54.365051       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="144.099µs"
	I1107 23:40:55.361301       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="9.058526ms"
	I1107 23:40:55.361554       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="47.823µs"
	I1107 23:40:57.327307       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-553062-m02"
	I1107 23:40:58.047664       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-553062-m03\" does not exist"
	I1107 23:40:58.049481       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-553062-m02"
	I1107 23:40:58.050063       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-x55ww" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-x55ww"
	I1107 23:40:58.063854       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-553062-m03" podCIDRs=["10.244.2.0/24"]
	I1107 23:40:58.085798       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-553062-m02"
	I1107 23:40:58.947283       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="52.622µs"
	
	* 
	* ==> kube-proxy [a40af2c6864b715c803779f18f916bc43115f63bd53e4a7dc3a7decfc1082466] <==
	* I1107 23:37:26.898367       1 server_others.go:69] "Using iptables proxy"
	I1107 23:37:26.913278       1 node.go:141] Successfully retrieved node IP: 192.168.39.246
	I1107 23:37:26.991429       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1107 23:37:26.991497       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1107 23:37:26.994259       1 server_others.go:152] "Using iptables Proxier"
	I1107 23:37:26.994334       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1107 23:37:26.994479       1 server.go:846] "Version info" version="v1.28.3"
	I1107 23:37:26.994599       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 23:37:26.995355       1 config.go:188] "Starting service config controller"
	I1107 23:37:26.995422       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1107 23:37:26.995543       1 config.go:97] "Starting endpoint slice config controller"
	I1107 23:37:26.995569       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1107 23:37:26.996098       1 config.go:315] "Starting node config controller"
	I1107 23:37:26.997552       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1107 23:37:27.096626       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1107 23:37:27.096690       1 shared_informer.go:318] Caches are synced for service config
	I1107 23:37:27.098155       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [442ed15392776a8e2dd97f07453c846efcf4b7d148c2eee4fb6cbd921929efb5] <==
	* I1107 23:37:22.072152       1 serving.go:348] Generated self-signed cert in-memory
	W1107 23:37:24.597403       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1107 23:37:24.597456       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1107 23:37:24.597468       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1107 23:37:24.597474       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1107 23:37:24.688619       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1107 23:37:24.688856       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 23:37:24.708613       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1107 23:37:24.708717       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1107 23:37:24.716243       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1107 23:37:24.716342       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1107 23:37:24.810028       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-07 23:36:52 UTC, ends at Tue 2023-11-07 23:41:02 UTC. --
	Nov 07 23:37:27 multinode-553062 kubelet[921]: E1107 23:37:27.124404     921 projected.go:198] Error preparing data for projected volume kube-api-access-jm9mb for pod default/busybox-5bc68d56bd-tvwc7: object "default"/"kube-root-ca.crt" not registered
	Nov 07 23:37:27 multinode-553062 kubelet[921]: E1107 23:37:27.124505     921 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aba25d32-a9c1-4008-b112-3409cec0c411-kube-api-access-jm9mb podName:aba25d32-a9c1-4008-b112-3409cec0c411 nodeName:}" failed. No retries permitted until 2023-11-07 23:37:29.124488378 +0000 UTC m=+10.893764317 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-jm9mb" (UniqueName: "kubernetes.io/projected/aba25d32-a9c1-4008-b112-3409cec0c411-kube-api-access-jm9mb") pod "busybox-5bc68d56bd-tvwc7" (UID: "aba25d32-a9c1-4008-b112-3409cec0c411") : object "default"/"kube-root-ca.crt" not registered
	Nov 07 23:37:27 multinode-553062 kubelet[921]: E1107 23:37:27.489527     921 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-tvwc7" podUID="aba25d32-a9c1-4008-b112-3409cec0c411"
	Nov 07 23:37:27 multinode-553062 kubelet[921]: E1107 23:37:27.489624     921 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-6ggfr" podUID="785c6064-d793-4959-8e34-28b4fc2549fc"
	Nov 07 23:37:29 multinode-553062 kubelet[921]: E1107 23:37:29.039133     921 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 07 23:37:29 multinode-553062 kubelet[921]: E1107 23:37:29.039199     921 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/785c6064-d793-4959-8e34-28b4fc2549fc-config-volume podName:785c6064-d793-4959-8e34-28b4fc2549fc nodeName:}" failed. No retries permitted until 2023-11-07 23:37:33.03918593 +0000 UTC m=+14.808461863 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/785c6064-d793-4959-8e34-28b4fc2549fc-config-volume") pod "coredns-5dd5756b68-6ggfr" (UID: "785c6064-d793-4959-8e34-28b4fc2549fc") : object "kube-system"/"coredns" not registered
	Nov 07 23:37:29 multinode-553062 kubelet[921]: E1107 23:37:29.139615     921 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Nov 07 23:37:29 multinode-553062 kubelet[921]: E1107 23:37:29.139673     921 projected.go:198] Error preparing data for projected volume kube-api-access-jm9mb for pod default/busybox-5bc68d56bd-tvwc7: object "default"/"kube-root-ca.crt" not registered
	Nov 07 23:37:29 multinode-553062 kubelet[921]: E1107 23:37:29.139726     921 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aba25d32-a9c1-4008-b112-3409cec0c411-kube-api-access-jm9mb podName:aba25d32-a9c1-4008-b112-3409cec0c411 nodeName:}" failed. No retries permitted until 2023-11-07 23:37:33.139712159 +0000 UTC m=+14.908988091 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-jm9mb" (UniqueName: "kubernetes.io/projected/aba25d32-a9c1-4008-b112-3409cec0c411-kube-api-access-jm9mb") pod "busybox-5bc68d56bd-tvwc7" (UID: "aba25d32-a9c1-4008-b112-3409cec0c411") : object "default"/"kube-root-ca.crt" not registered
	Nov 07 23:37:29 multinode-553062 kubelet[921]: E1107 23:37:29.489548     921 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-tvwc7" podUID="aba25d32-a9c1-4008-b112-3409cec0c411"
	Nov 07 23:37:29 multinode-553062 kubelet[921]: E1107 23:37:29.489710     921 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-6ggfr" podUID="785c6064-d793-4959-8e34-28b4fc2549fc"
	Nov 07 23:37:30 multinode-553062 kubelet[921]: I1107 23:37:30.542022     921 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 07 23:37:57 multinode-553062 kubelet[921]: I1107 23:37:57.685353     921 scope.go:117] "RemoveContainer" containerID="76223c5b6d54d091af38fe7b61c35d957cfd8cf163cd8f04d58c5ab085ef1140"
	Nov 07 23:38:18 multinode-553062 kubelet[921]: E1107 23:38:18.510866     921 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 07 23:38:18 multinode-553062 kubelet[921]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 07 23:38:18 multinode-553062 kubelet[921]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 07 23:38:18 multinode-553062 kubelet[921]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 07 23:39:18 multinode-553062 kubelet[921]: E1107 23:39:18.513431     921 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 07 23:39:18 multinode-553062 kubelet[921]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 07 23:39:18 multinode-553062 kubelet[921]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 07 23:39:18 multinode-553062 kubelet[921]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 07 23:40:18 multinode-553062 kubelet[921]: E1107 23:40:18.512684     921 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 07 23:40:18 multinode-553062 kubelet[921]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 07 23:40:18 multinode-553062 kubelet[921]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 07 23:40:18 multinode-553062 kubelet[921]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-553062 -n multinode-553062
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-553062 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (682.17s)
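
A side note on the kubelet entries above: the repeated "No retries permitted until ..." messages show the volume-mount retry backoff doubling (durationBeforeRetry 2s, then 4s). A minimal Go sketch of that exponential-backoff pattern, independent of kubelet internals and with an illustrative cap:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Mirrors the durationBeforeRetry progression seen in the kubelet log
	// (2s, 4s, ...); the cap below is illustrative, not kubelet's exact value.
	delay := 2 * time.Second
	const maxDelay = 2 * time.Minute
	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}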

TestMultiNode/serial/StopMultiNode (143.16s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 stop
multinode_test.go:314: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-553062 stop: exit status 82 (2m1.214525472s)

-- stdout --
	* Stopping node "multinode-553062"  ...
	* Stopping node "multinode-553062"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:316: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-553062 stop": exit status 82
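
For context on exit status 82: GUEST_STOP_TIMEOUT means the VM was still reported as "Running" when the stop deadline expired. A minimal sketch of that poll-until-stopped pattern (the getState helper is a hypothetical stand-in, not minikube's actual driver API):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls a state function until the VM leaves "Running" or the
// deadline expires, which is the failure mode the stderr above reports.
func waitForStop(getState func() (string, error), timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		state, err := getState()
		if err != nil {
			return err
		}
		if state != "Running" {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New(`stop: unable to stop vm, current state "Running"`)
}

func main() {
	// Simulate a guest that never shuts down, reproducing the timeout path.
	stuck := func() (string, error) { return "Running", nil }
	fmt.Println(waitForStop(stuck, 2*time.Second))
}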
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-553062 status: exit status 3 (18.770837812s)

-- stdout --
	multinode-553062
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-553062-m02
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	E1107 23:43:25.313130   35704 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.246:22: connect: no route to host
	E1107 23:43:25.313168   35704 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.246:22: connect: no route to host

** /stderr **
multinode_test.go:323: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-553062 status" : exit status 3
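
The "no route to host" lines explain the host: Error status: the SSH dial to the control-plane node never completes. A self-contained sketch of that reachability probe (address copied from the error output above; this is not minikube's code):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address taken from the dial error above; adjust for other nodes.
	addr := "192.168.39.246:22"
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// A VM that went away mid-stop typically yields
		// "connect: no route to host" here, matching the stderr above.
		fmt.Println("host unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("ssh port reachable")
}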
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-553062 -n multinode-553062
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-553062 -n multinode-553062: exit status 3 (3.17003167s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1107 23:43:28.641148   35803 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.246:22: connect: no route to host
	E1107 23:43:28.641169   35803 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.246:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-553062" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (143.16s)

TestPreload (250.01s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-197747 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1107 23:53:45.484083   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
E1107 23:53:53.871743   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-197747 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m32.541845573s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-197747 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-197747 image pull gcr.io/k8s-minikube/busybox: (2.854859704s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-197747
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-197747: (8.124719872s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-197747 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1107 23:55:38.956337   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
E1107 23:55:42.434077   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-197747 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m23.354137238s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-197747 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

-- /stdout --
panic.go:523: *** TestPreload FAILED at 2023-11-07 23:55:53.066423539 +0000 UTC m=+3287.229732418
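
The assertion at preload_test.go:76 amounts to a substring scan of the image list output for the expected reference. A self-contained sketch of that check (the output literal is abbreviated from the list above):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Abbreviated stand-in for the image list stdout captured above.
	output := `registry.k8s.io/pause:3.7
registry.k8s.io/kube-apiserver:v1.24.4
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20220726-ed811e41`

	want := "gcr.io/k8s-minikube/busybox"
	if strings.Contains(output, want) {
		fmt.Printf("found %s\n", want)
	} else {
		// This branch fires for the list above: the image pulled before the
		// restart is no longer present, which is exactly what fails the test.
		fmt.Printf("expected to find %s in image list output\n", want)
	}
}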
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-197747 -n test-preload-197747
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-197747 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-197747 logs -n 25: (1.125901085s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-553062 ssh -n                                                                 | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | multinode-553062-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-553062 ssh -n multinode-553062 sudo cat                                       | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | /home/docker/cp-test_multinode-553062-m03_multinode-553062.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-553062 cp multinode-553062-m03:/home/docker/cp-test.txt                       | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | multinode-553062-m02:/home/docker/cp-test_multinode-553062-m03_multinode-553062-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-553062 ssh -n                                                                 | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | multinode-553062-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-553062 ssh -n multinode-553062-m02 sudo cat                                   | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | /home/docker/cp-test_multinode-553062-m03_multinode-553062-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-553062 node stop m03                                                          | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	| node    | multinode-553062 node start                                                             | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC | 07 Nov 23 23:29 UTC |
	|         | m03 --alsologtostderr                                                                   |                      |         |         |                     |                     |
	| node    | list -p multinode-553062                                                                | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC |                     |
	| stop    | -p multinode-553062                                                                     | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:29 UTC |                     |
	| start   | -p multinode-553062                                                                     | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:31 UTC | 07 Nov 23 23:41 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-553062                                                                | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:41 UTC |                     |
	| node    | multinode-553062 node delete                                                            | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:41 UTC | 07 Nov 23 23:41 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-553062 stop                                                                   | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:41 UTC |                     |
	| start   | -p multinode-553062                                                                     | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:43 UTC | 07 Nov 23 23:50 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-553062                                                                | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:50 UTC |                     |
	| start   | -p multinode-553062-m02                                                                 | multinode-553062-m02 | jenkins | v1.32.0 | 07 Nov 23 23:50 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-553062-m03                                                                 | multinode-553062-m03 | jenkins | v1.32.0 | 07 Nov 23 23:50 UTC | 07 Nov 23 23:51 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-553062                                                                 | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:51 UTC |                     |
	| delete  | -p multinode-553062-m03                                                                 | multinode-553062-m03 | jenkins | v1.32.0 | 07 Nov 23 23:51 UTC | 07 Nov 23 23:51 UTC |
	| delete  | -p multinode-553062                                                                     | multinode-553062     | jenkins | v1.32.0 | 07 Nov 23 23:51 UTC | 07 Nov 23 23:51 UTC |
	| start   | -p test-preload-197747                                                                  | test-preload-197747  | jenkins | v1.32.0 | 07 Nov 23 23:51 UTC | 07 Nov 23 23:54 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-197747 image pull                                                          | test-preload-197747  | jenkins | v1.32.0 | 07 Nov 23 23:54 UTC | 07 Nov 23 23:54 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-197747                                                                  | test-preload-197747  | jenkins | v1.32.0 | 07 Nov 23 23:54 UTC | 07 Nov 23 23:54 UTC |
	| start   | -p test-preload-197747                                                                  | test-preload-197747  | jenkins | v1.32.0 | 07 Nov 23 23:54 UTC | 07 Nov 23 23:55 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-197747 image list                                                          | test-preload-197747  | jenkins | v1.32.0 | 07 Nov 23 23:55 UTC | 07 Nov 23 23:55 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/07 23:54:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 23:54:29.529893   38629 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:54:29.530054   38629 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:54:29.530064   38629 out.go:309] Setting ErrFile to fd 2...
	I1107 23:54:29.530069   38629 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:54:29.530244   38629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
	I1107 23:54:29.530784   38629 out.go:303] Setting JSON to false
	I1107 23:54:29.533631   38629 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5819,"bootTime":1699395451,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 23:54:29.533690   38629 start.go:138] virtualization: kvm guest
	I1107 23:54:29.535864   38629 out.go:177] * [test-preload-197747] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 23:54:29.537432   38629 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 23:54:29.537477   38629 notify.go:220] Checking for updates...
	I1107 23:54:29.538964   38629 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:54:29.540305   38629 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1107 23:54:29.541477   38629 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	I1107 23:54:29.542719   38629 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 23:54:29.543902   38629 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 23:54:29.545439   38629 config.go:182] Loaded profile config "test-preload-197747": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1107 23:54:29.545846   38629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:54:29.545921   38629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:54:29.559381   38629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36023
	I1107 23:54:29.559742   38629 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:54:29.560237   38629 main.go:141] libmachine: Using API Version  1
	I1107 23:54:29.560256   38629 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:54:29.560559   38629 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:54:29.560752   38629 main.go:141] libmachine: (test-preload-197747) Calling .DriverName
	I1107 23:54:29.562597   38629 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1107 23:54:29.563774   38629 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:54:29.564148   38629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:54:29.564189   38629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:54:29.577731   38629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40845
	I1107 23:54:29.578114   38629 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:54:29.578598   38629 main.go:141] libmachine: Using API Version  1
	I1107 23:54:29.578632   38629 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:54:29.578931   38629 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:54:29.579083   38629 main.go:141] libmachine: (test-preload-197747) Calling .DriverName
	I1107 23:54:29.612337   38629 out.go:177] * Using the kvm2 driver based on existing profile
	I1107 23:54:29.613947   38629 start.go:298] selected driver: kvm2
	I1107 23:54:29.613963   38629 start.go:902] validating driver "kvm2" against &{Name:test-preload-197747 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-197747 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:54:29.614070   38629 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 23:54:29.614738   38629 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:54:29.614817   38629 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17585-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1107 23:54:29.628575   38629 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1107 23:54:29.628905   38629 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 23:54:29.628949   38629 cni.go:84] Creating CNI manager for ""
	I1107 23:54:29.628964   38629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1107 23:54:29.628979   38629 start_flags.go:323] config:
	{Name:test-preload-197747 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-197747 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:54:29.629155   38629 iso.go:125] acquiring lock: {Name:mk02d02b2a7a45dbdd1b46a32fb0724673cb4d8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:54:29.631807   38629 out.go:177] * Starting control plane node test-preload-197747 in cluster test-preload-197747
	I1107 23:54:29.633204   38629 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1107 23:54:30.110622   38629 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1107 23:54:30.110663   38629 cache.go:56] Caching tarball of preloaded images
	I1107 23:54:30.110835   38629 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1107 23:54:30.112860   38629 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I1107 23:54:30.114401   38629 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1107 23:54:30.229633   38629 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1107 23:54:43.208761   38629 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1107 23:54:43.208889   38629 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1107 23:54:44.105828   38629 cache.go:59] Finished verifying existence of preloaded tar for  v1.24.4 on crio
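
The "?checksum=md5:..." suffix on the download URL above is what the "getting checksum" / "verifying checksum" steps consume. A minimal Go sketch of that post-download check; the file path and the main wrapper are illustrative, not minikube's actual code:

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    // verifyMD5 hashes the file at path and compares it to the expected hex digest.
    func verifyMD5(path, expected string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != expected {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
        }
        return nil
    }

    func main() {
        // Digest taken from the download URL in the log; the path is a placeholder.
        err := verifyMD5("preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4",
            "b2ee0ab83ed99f9e7ff71cb0cf27e8f9")
        fmt.Println(err)
    }
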
	I1107 23:54:44.105991   38629 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/test-preload-197747/config.json ...
	I1107 23:54:44.106223   38629 start.go:365] acquiring machines lock for test-preload-197747: {Name:mkf032f30be570950285b6e092e75fb29cc3d166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1107 23:54:44.106303   38629 start.go:369] acquired machines lock for "test-preload-197747" in 50.51µs
	I1107 23:54:44.106323   38629 start.go:96] Skipping create...Using existing machine configuration
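
The machines lock above carries a Delay of 500ms and a Timeout of 13m0s. A rough stdlib-only sketch of acquiring a lock on that pattern; the acquire/try names are hypothetical, and minikube's real lock is file-based and shared across processes:

    package main

    import (
        "errors"
        "fmt"
        "sync"
        "time"
    )

    // acquire polls try every delay until it succeeds or timeout elapses,
    // mirroring the Delay/Timeout fields in the lock spec logged above.
    func acquire(try func() bool, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for !try() {
            if time.Now().After(deadline) {
                return errors.New("timed out acquiring lock")
            }
            time.Sleep(delay)
        }
        return nil
    }

    func main() {
        var mu sync.Mutex
        start := time.Now()
        if err := acquire(mu.TryLock, 500*time.Millisecond, 13*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        defer mu.Unlock()
        fmt.Printf("acquired machines lock in %s\n", time.Since(start))
    }
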
	I1107 23:54:44.106334   38629 fix.go:54] fixHost starting: 
	I1107 23:54:44.106615   38629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:54:44.106665   38629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:54:44.120156   38629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42539
	I1107 23:54:44.120568   38629 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:54:44.121080   38629 main.go:141] libmachine: Using API Version  1
	I1107 23:54:44.121109   38629 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:54:44.121454   38629 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:54:44.121652   38629 main.go:141] libmachine: (test-preload-197747) Calling .DriverName
	I1107 23:54:44.121825   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetState
	I1107 23:54:44.123361   38629 fix.go:102] recreateIfNeeded on test-preload-197747: state=Stopped err=<nil>
	I1107 23:54:44.123388   38629 main.go:141] libmachine: (test-preload-197747) Calling .DriverName
	W1107 23:54:44.123637   38629 fix.go:128] unexpected machine state, will restart: <nil>
	I1107 23:54:44.126504   38629 out.go:177] * Restarting existing kvm2 VM for "test-preload-197747" ...
	I1107 23:54:44.127822   38629 main.go:141] libmachine: (test-preload-197747) Calling .Start
	I1107 23:54:44.127990   38629 main.go:141] libmachine: (test-preload-197747) Ensuring networks are active...
	I1107 23:54:44.128610   38629 main.go:141] libmachine: (test-preload-197747) Ensuring network default is active
	I1107 23:54:44.128925   38629 main.go:141] libmachine: (test-preload-197747) Ensuring network mk-test-preload-197747 is active
	I1107 23:54:44.129247   38629 main.go:141] libmachine: (test-preload-197747) Getting domain xml...
	I1107 23:54:44.129980   38629 main.go:141] libmachine: (test-preload-197747) Creating domain...
	I1107 23:54:45.346351   38629 main.go:141] libmachine: (test-preload-197747) Waiting to get IP...
	I1107 23:54:45.347190   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:54:45.347647   38629 main.go:141] libmachine: (test-preload-197747) DBG | unable to find current IP address of domain test-preload-197747 in network mk-test-preload-197747
	I1107 23:54:45.347738   38629 main.go:141] libmachine: (test-preload-197747) DBG | I1107 23:54:45.347625   38704 retry.go:31] will retry after 304.099115ms: waiting for machine to come up
	I1107 23:54:45.653150   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:54:45.653603   38629 main.go:141] libmachine: (test-preload-197747) DBG | unable to find current IP address of domain test-preload-197747 in network mk-test-preload-197747
	I1107 23:54:45.653634   38629 main.go:141] libmachine: (test-preload-197747) DBG | I1107 23:54:45.653578   38704 retry.go:31] will retry after 267.057061ms: waiting for machine to come up
	I1107 23:54:45.921888   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:54:45.922233   38629 main.go:141] libmachine: (test-preload-197747) DBG | unable to find current IP address of domain test-preload-197747 in network mk-test-preload-197747
	I1107 23:54:45.922355   38629 main.go:141] libmachine: (test-preload-197747) DBG | I1107 23:54:45.922190   38704 retry.go:31] will retry after 364.83163ms: waiting for machine to come up
	I1107 23:54:46.288725   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:54:46.289109   38629 main.go:141] libmachine: (test-preload-197747) DBG | unable to find current IP address of domain test-preload-197747 in network mk-test-preload-197747
	I1107 23:54:46.289139   38629 main.go:141] libmachine: (test-preload-197747) DBG | I1107 23:54:46.289053   38704 retry.go:31] will retry after 393.218207ms: waiting for machine to come up
	I1107 23:54:46.683537   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:54:46.683921   38629 main.go:141] libmachine: (test-preload-197747) DBG | unable to find current IP address of domain test-preload-197747 in network mk-test-preload-197747
	I1107 23:54:46.683944   38629 main.go:141] libmachine: (test-preload-197747) DBG | I1107 23:54:46.683864   38704 retry.go:31] will retry after 521.600207ms: waiting for machine to come up
	I1107 23:54:47.207439   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:54:47.207741   38629 main.go:141] libmachine: (test-preload-197747) DBG | unable to find current IP address of domain test-preload-197747 in network mk-test-preload-197747
	I1107 23:54:47.207770   38629 main.go:141] libmachine: (test-preload-197747) DBG | I1107 23:54:47.207691   38704 retry.go:31] will retry after 674.291746ms: waiting for machine to come up
	I1107 23:54:47.883362   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:54:47.883717   38629 main.go:141] libmachine: (test-preload-197747) DBG | unable to find current IP address of domain test-preload-197747 in network mk-test-preload-197747
	I1107 23:54:47.883746   38629 main.go:141] libmachine: (test-preload-197747) DBG | I1107 23:54:47.883672   38704 retry.go:31] will retry after 780.107794ms: waiting for machine to come up
	I1107 23:54:48.665503   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:54:48.665960   38629 main.go:141] libmachine: (test-preload-197747) DBG | unable to find current IP address of domain test-preload-197747 in network mk-test-preload-197747
	I1107 23:54:48.665985   38629 main.go:141] libmachine: (test-preload-197747) DBG | I1107 23:54:48.665926   38704 retry.go:31] will retry after 945.683882ms: waiting for machine to come up
	I1107 23:54:49.613171   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:54:49.613517   38629 main.go:141] libmachine: (test-preload-197747) DBG | unable to find current IP address of domain test-preload-197747 in network mk-test-preload-197747
	I1107 23:54:49.613547   38629 main.go:141] libmachine: (test-preload-197747) DBG | I1107 23:54:49.613460   38704 retry.go:31] will retry after 1.832274178s: waiting for machine to come up
	I1107 23:54:51.448319   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:54:51.448694   38629 main.go:141] libmachine: (test-preload-197747) DBG | unable to find current IP address of domain test-preload-197747 in network mk-test-preload-197747
	I1107 23:54:51.448721   38629 main.go:141] libmachine: (test-preload-197747) DBG | I1107 23:54:51.448641   38704 retry.go:31] will retry after 1.776985529s: waiting for machine to come up
	I1107 23:54:53.227677   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:54:53.228274   38629 main.go:141] libmachine: (test-preload-197747) DBG | unable to find current IP address of domain test-preload-197747 in network mk-test-preload-197747
	I1107 23:54:53.228306   38629 main.go:141] libmachine: (test-preload-197747) DBG | I1107 23:54:53.228210   38704 retry.go:31] will retry after 1.935338756s: waiting for machine to come up
	I1107 23:54:55.164883   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:54:55.165355   38629 main.go:141] libmachine: (test-preload-197747) DBG | unable to find current IP address of domain test-preload-197747 in network mk-test-preload-197747
	I1107 23:54:55.165392   38629 main.go:141] libmachine: (test-preload-197747) DBG | I1107 23:54:55.165299   38704 retry.go:31] will retry after 3.410920305s: waiting for machine to come up
	I1107 23:54:58.579778   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:54:58.580141   38629 main.go:141] libmachine: (test-preload-197747) DBG | unable to find current IP address of domain test-preload-197747 in network mk-test-preload-197747
	I1107 23:54:58.580173   38629 main.go:141] libmachine: (test-preload-197747) DBG | I1107 23:54:58.580094   38704 retry.go:31] will retry after 4.435868777s: waiting for machine to come up
	I1107 23:55:03.020852   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:03.021270   38629 main.go:141] libmachine: (test-preload-197747) Found IP for machine: 192.168.39.173
	I1107 23:55:03.021292   38629 main.go:141] libmachine: (test-preload-197747) Reserving static IP address...
	I1107 23:55:03.021308   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has current primary IP address 192.168.39.173 and MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:03.021616   38629 main.go:141] libmachine: (test-preload-197747) Reserved static IP address: 192.168.39.173
	I1107 23:55:03.021653   38629 main.go:141] libmachine: (test-preload-197747) Waiting for SSH to be available...
	I1107 23:55:03.021677   38629 main.go:141] libmachine: (test-preload-197747) DBG | found host DHCP lease matching {name: "test-preload-197747", mac: "52:54:00:1c:4b:79", ip: "192.168.39.173"} in network mk-test-preload-197747: {Iface:virbr1 ExpiryTime:2023-11-08 00:54:56 +0000 UTC Type:0 Mac:52:54:00:1c:4b:79 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:test-preload-197747 Clientid:01:52:54:00:1c:4b:79}
	I1107 23:55:03.021701   38629 main.go:141] libmachine: (test-preload-197747) DBG | skip adding static IP to network mk-test-preload-197747 - found existing host DHCP lease matching {name: "test-preload-197747", mac: "52:54:00:1c:4b:79", ip: "192.168.39.173"}
	I1107 23:55:03.021721   38629 main.go:141] libmachine: (test-preload-197747) DBG | Getting to WaitForSSH function...
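
The "will retry after ..." lines above poll libvirt for a DHCP lease with randomized, growing delays until the machine reports an IP. A Go sketch of that loop shape; lookup, the attempt budget, and the base delay are assumptions for illustration:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup with jittered, doubling delays until an IP is
    // reported or the attempt budget is spent. lookup stands in for the
    // libvirt DHCP-lease query seen in the DBG lines above.
    func waitForIP(lookup func() (string, bool), attempts int) (string, error) {
        delay := 250 * time.Millisecond
        for i := 0; i < attempts; i++ {
            if ip, ok := lookup(); ok {
                return ip, nil
            }
            jittered := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
            time.Sleep(jittered)
            delay *= 2 // back off
        }
        return "", errors.New("machine never reported an IP")
    }

    func main() {
        calls := 0
        ip, err := waitForIP(func() (string, bool) {
            calls++
            return "192.168.39.173", calls > 3 // pretend the lease appears on the 4th poll
        }, 10)
        fmt.Println(ip, err)
    }
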
	I1107 23:55:03.023555   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:03.023877   38629 main.go:141] libmachine: (test-preload-197747) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:79", ip: ""} in network mk-test-preload-197747: {Iface:virbr1 ExpiryTime:2023-11-08 00:54:56 +0000 UTC Type:0 Mac:52:54:00:1c:4b:79 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:test-preload-197747 Clientid:01:52:54:00:1c:4b:79}
	I1107 23:55:03.023903   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined IP address 192.168.39.173 and MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:03.024033   38629 main.go:141] libmachine: (test-preload-197747) DBG | Using SSH client type: external
	I1107 23:55:03.024061   38629 main.go:141] libmachine: (test-preload-197747) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/test-preload-197747/id_rsa (-rw-------)
	I1107 23:55:03.024099   38629 main.go:141] libmachine: (test-preload-197747) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.173 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/test-preload-197747/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1107 23:55:03.024120   38629 main.go:141] libmachine: (test-preload-197747) DBG | About to run SSH command:
	I1107 23:55:03.024137   38629 main.go:141] libmachine: (test-preload-197747) DBG | exit 0
	I1107 23:55:03.120714   38629 main.go:141] libmachine: (test-preload-197747) DBG | SSH cmd err, output: <nil>: 
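
The external SSH probe above is an ordinary ssh invocation with hardening flags. A sketch that assembles the same command line with os/exec; the host, key path, and probe command are illustrative:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // externalSSH builds an ssh command with the flags from the
    // "Using SSH client type: external" line above.
    func externalSSH(host, keyPath, command string) *exec.Cmd {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "ControlMaster=no",
            "-o", "ControlPath=none",
            "-o", "LogLevel=quiet",
            "-o", "PasswordAuthentication=no",
            "-o", "ServerAliveInterval=60",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + host,
            command,
        }
        return exec.Command("/usr/bin/ssh", args...)
    }

    func main() {
        cmd := externalSSH("192.168.39.173", "/path/to/id_rsa", "exit 0")
        fmt.Println(cmd.String()) // print the full command line without running it
    }
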
	I1107 23:55:03.121098   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetConfigRaw
	I1107 23:55:03.121773   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetIP
	I1107 23:55:03.124111   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:03.124386   38629 main.go:141] libmachine: (test-preload-197747) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:79", ip: ""} in network mk-test-preload-197747: {Iface:virbr1 ExpiryTime:2023-11-08 00:54:56 +0000 UTC Type:0 Mac:52:54:00:1c:4b:79 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:test-preload-197747 Clientid:01:52:54:00:1c:4b:79}
	I1107 23:55:03.124412   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined IP address 192.168.39.173 and MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:03.124699   38629 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/test-preload-197747/config.json ...
	I1107 23:55:03.124878   38629 machine.go:88] provisioning docker machine ...
	I1107 23:55:03.124894   38629 main.go:141] libmachine: (test-preload-197747) Calling .DriverName
	I1107 23:55:03.125088   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetMachineName
	I1107 23:55:03.125266   38629 buildroot.go:166] provisioning hostname "test-preload-197747"
	I1107 23:55:03.125283   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetMachineName
	I1107 23:55:03.125436   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHHostname
	I1107 23:55:03.127256   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:03.127560   38629 main.go:141] libmachine: (test-preload-197747) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:79", ip: ""} in network mk-test-preload-197747: {Iface:virbr1 ExpiryTime:2023-11-08 00:54:56 +0000 UTC Type:0 Mac:52:54:00:1c:4b:79 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:test-preload-197747 Clientid:01:52:54:00:1c:4b:79}
	I1107 23:55:03.127591   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined IP address 192.168.39.173 and MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:03.127704   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHPort
	I1107 23:55:03.127852   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHKeyPath
	I1107 23:55:03.127984   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHKeyPath
	I1107 23:55:03.128087   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHUsername
	I1107 23:55:03.128199   38629 main.go:141] libmachine: Using SSH client type: native
	I1107 23:55:03.128541   38629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I1107 23:55:03.128555   38629 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-197747 && echo "test-preload-197747" | sudo tee /etc/hostname
	I1107 23:55:03.268685   38629 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-197747
	
	I1107 23:55:03.268715   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHHostname
	I1107 23:55:03.271573   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:03.271916   38629 main.go:141] libmachine: (test-preload-197747) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:79", ip: ""} in network mk-test-preload-197747: {Iface:virbr1 ExpiryTime:2023-11-08 00:54:56 +0000 UTC Type:0 Mac:52:54:00:1c:4b:79 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:test-preload-197747 Clientid:01:52:54:00:1c:4b:79}
	I1107 23:55:03.271959   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined IP address 192.168.39.173 and MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:03.272111   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHPort
	I1107 23:55:03.272299   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHKeyPath
	I1107 23:55:03.272552   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHKeyPath
	I1107 23:55:03.272684   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHUsername
	I1107 23:55:03.272874   38629 main.go:141] libmachine: Using SSH client type: native
	I1107 23:55:03.273189   38629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I1107 23:55:03.273206   38629 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-197747' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-197747/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-197747' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 23:55:03.408339   38629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1107 23:55:03.408364   38629 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1107 23:55:03.408381   38629 buildroot.go:174] setting up certificates
	I1107 23:55:03.408391   38629 provision.go:83] configureAuth start
	I1107 23:55:03.408399   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetMachineName
	I1107 23:55:03.408718   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetIP
	I1107 23:55:03.411360   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:03.411663   38629 main.go:141] libmachine: (test-preload-197747) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:79", ip: ""} in network mk-test-preload-197747: {Iface:virbr1 ExpiryTime:2023-11-08 00:54:56 +0000 UTC Type:0 Mac:52:54:00:1c:4b:79 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:test-preload-197747 Clientid:01:52:54:00:1c:4b:79}
	I1107 23:55:03.411688   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined IP address 192.168.39.173 and MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:03.411795   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHHostname
	I1107 23:55:03.413761   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:03.414080   38629 main.go:141] libmachine: (test-preload-197747) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:79", ip: ""} in network mk-test-preload-197747: {Iface:virbr1 ExpiryTime:2023-11-08 00:54:56 +0000 UTC Type:0 Mac:52:54:00:1c:4b:79 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:test-preload-197747 Clientid:01:52:54:00:1c:4b:79}
	I1107 23:55:03.414112   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined IP address 192.168.39.173 and MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:03.414233   38629 provision.go:138] copyHostCerts
	I1107 23:55:03.414283   38629 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1107 23:55:03.414296   38629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1107 23:55:03.414363   38629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1107 23:55:03.414448   38629 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1107 23:55:03.414459   38629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1107 23:55:03.414483   38629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1107 23:55:03.414531   38629 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1107 23:55:03.414538   38629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1107 23:55:03.414556   38629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1107 23:55:03.414601   38629 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.test-preload-197747 san=[192.168.39.173 192.168.39.173 localhost 127.0.0.1 minikube test-preload-197747]
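
The "generating server cert" step above issues a certificate whose SANs cover the VM IP, localhost, and the host and cluster names. A short crypto/x509 sketch of issuing such a cert; note it is self-signed here to stay compact, whereas the real flow signs with the minikube CA key listed in the log:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-197747"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs from the san=[...] list in the log.
            DNSNames:    []string{"localhost", "minikube", "test-preload-197747"},
            IPAddresses: []net.IP{net.ParseIP("192.168.39.173"), net.ParseIP("127.0.0.1")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }
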
	I1107 23:55:03.494912   38629 provision.go:172] copyRemoteCerts
	I1107 23:55:03.494996   38629 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 23:55:03.495026   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHHostname
	I1107 23:55:03.497909   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:03.498252   38629 main.go:141] libmachine: (test-preload-197747) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:79", ip: ""} in network mk-test-preload-197747: {Iface:virbr1 ExpiryTime:2023-11-08 00:54:56 +0000 UTC Type:0 Mac:52:54:00:1c:4b:79 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:test-preload-197747 Clientid:01:52:54:00:1c:4b:79}
	I1107 23:55:03.498281   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined IP address 192.168.39.173 and MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:03.498492   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHPort
	I1107 23:55:03.498676   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHKeyPath
	I1107 23:55:03.498866   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHUsername
	I1107 23:55:03.499033   38629 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/test-preload-197747/id_rsa Username:docker}
	I1107 23:55:03.590404   38629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1107 23:55:03.612476   38629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1107 23:55:03.634269   38629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1107 23:55:03.656206   38629 provision.go:86] duration metric: configureAuth took 247.805102ms
	I1107 23:55:03.656229   38629 buildroot.go:189] setting minikube options for container-runtime
	I1107 23:55:03.656419   38629 config.go:182] Loaded profile config "test-preload-197747": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1107 23:55:03.656507   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHHostname
	I1107 23:55:03.659146   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:03.659488   38629 main.go:141] libmachine: (test-preload-197747) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:79", ip: ""} in network mk-test-preload-197747: {Iface:virbr1 ExpiryTime:2023-11-08 00:54:56 +0000 UTC Type:0 Mac:52:54:00:1c:4b:79 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:test-preload-197747 Clientid:01:52:54:00:1c:4b:79}
	I1107 23:55:03.659527   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined IP address 192.168.39.173 and MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:03.659686   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHPort
	I1107 23:55:03.659894   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHKeyPath
	I1107 23:55:03.660088   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHKeyPath
	I1107 23:55:03.660218   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHUsername
	I1107 23:55:03.660333   38629 main.go:141] libmachine: Using SSH client type: native
	I1107 23:55:03.660631   38629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I1107 23:55:03.660645   38629 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1107 23:55:03.979080   38629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1107 23:55:03.979104   38629 machine.go:91] provisioned docker machine in 854.213854ms
	I1107 23:55:03.979113   38629 start.go:300] post-start starting for "test-preload-197747" (driver="kvm2")
	I1107 23:55:03.979122   38629 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 23:55:03.979135   38629 main.go:141] libmachine: (test-preload-197747) Calling .DriverName
	I1107 23:55:03.979403   38629 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 23:55:03.979437   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHHostname
	I1107 23:55:03.982092   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:03.982427   38629 main.go:141] libmachine: (test-preload-197747) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:79", ip: ""} in network mk-test-preload-197747: {Iface:virbr1 ExpiryTime:2023-11-08 00:54:56 +0000 UTC Type:0 Mac:52:54:00:1c:4b:79 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:test-preload-197747 Clientid:01:52:54:00:1c:4b:79}
	I1107 23:55:03.982497   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined IP address 192.168.39.173 and MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:03.982611   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHPort
	I1107 23:55:03.982786   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHKeyPath
	I1107 23:55:03.982979   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHUsername
	I1107 23:55:03.983113   38629 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/test-preload-197747/id_rsa Username:docker}
	I1107 23:55:04.074107   38629 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 23:55:04.078259   38629 info.go:137] Remote host: Buildroot 2021.02.12
	I1107 23:55:04.078277   38629 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1107 23:55:04.078345   38629 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1107 23:55:04.078441   38629 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1107 23:55:04.078554   38629 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 23:55:04.086360   38629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
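
The filesync scan above maps every file under .minikube/files to the same path inside the VM (files/etc/ssl/certs/168482.pem becomes /etc/ssl/certs/168482.pem). A sketch of that mapping; the root path and function name are illustrative:

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
    )

    // scanLocalAssets walks root and maps each file to its in-VM
    // destination, given by the path relative to root.
    func scanLocalAssets(root string) (map[string]string, error) {
        assets := map[string]string{}
        err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            rel, err := filepath.Rel(root, path)
            if err != nil {
                return err
            }
            // e.g. etc/ssl/certs/168482.pem -> /etc/ssl/certs/168482.pem
            assets[path] = "/" + filepath.ToSlash(rel)
            return nil
        })
        return assets, err
    }

    func main() {
        assets, err := scanLocalAssets("/home/jenkins/.minikube/files") // root is illustrative
        fmt.Println(assets, err)
    }
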
	I1107 23:55:04.109186   38629 start.go:303] post-start completed in 130.061364ms
	I1107 23:55:04.109209   38629 fix.go:56] fixHost completed within 20.002879688s
	I1107 23:55:04.109240   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHHostname
	I1107 23:55:04.111567   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:04.111922   38629 main.go:141] libmachine: (test-preload-197747) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:79", ip: ""} in network mk-test-preload-197747: {Iface:virbr1 ExpiryTime:2023-11-08 00:54:56 +0000 UTC Type:0 Mac:52:54:00:1c:4b:79 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:test-preload-197747 Clientid:01:52:54:00:1c:4b:79}
	I1107 23:55:04.111954   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined IP address 192.168.39.173 and MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:04.112115   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHPort
	I1107 23:55:04.112327   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHKeyPath
	I1107 23:55:04.112505   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHKeyPath
	I1107 23:55:04.112630   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHUsername
	I1107 23:55:04.112749   38629 main.go:141] libmachine: Using SSH client type: native
	I1107 23:55:04.113151   38629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.173 22 <nil> <nil>}
	I1107 23:55:04.113167   38629 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1107 23:55:04.241557   38629 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699401304.190704008
	
	I1107 23:55:04.241585   38629 fix.go:206] guest clock: 1699401304.190704008
	I1107 23:55:04.241592   38629 fix.go:219] Guest: 2023-11-07 23:55:04.190704008 +0000 UTC Remote: 2023-11-07 23:55:04.109212014 +0000 UTC m=+34.628590139 (delta=81.491994ms)
	I1107 23:55:04.241615   38629 fix.go:190] guest clock delta is within tolerance: 81.491994ms
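
The guest-clock check above compares the VM clock against the host and accepts a small delta. A sketch of that comparison reusing the timestamps from the log; the 2s tolerance is an assumption, since the log does not state the threshold:

    package main

    import (
        "fmt"
        "time"
    )

    // clockDeltaOK reports whether guest and host clocks agree to within
    // tolerance, as in the "guest clock delta is within tolerance" line.
    func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        guest := time.Unix(0, 1699401304190704008) // 1699401304.190704008 from the log
        host := guest.Add(-81491994 * time.Nanosecond)
        delta, ok := clockDeltaOK(guest, host, 2*time.Second)
        fmt.Printf("delta=%s within tolerance=%v\n", delta, ok)
    }
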
	I1107 23:55:04.241620   38629 start.go:83] releasing machines lock for "test-preload-197747", held for 20.135306123s
	I1107 23:55:04.241638   38629 main.go:141] libmachine: (test-preload-197747) Calling .DriverName
	I1107 23:55:04.241892   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetIP
	I1107 23:55:04.244414   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:04.244778   38629 main.go:141] libmachine: (test-preload-197747) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:79", ip: ""} in network mk-test-preload-197747: {Iface:virbr1 ExpiryTime:2023-11-08 00:54:56 +0000 UTC Type:0 Mac:52:54:00:1c:4b:79 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:test-preload-197747 Clientid:01:52:54:00:1c:4b:79}
	I1107 23:55:04.244809   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined IP address 192.168.39.173 and MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:04.244921   38629 main.go:141] libmachine: (test-preload-197747) Calling .DriverName
	I1107 23:55:04.245418   38629 main.go:141] libmachine: (test-preload-197747) Calling .DriverName
	I1107 23:55:04.245591   38629 main.go:141] libmachine: (test-preload-197747) Calling .DriverName
	I1107 23:55:04.245702   38629 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 23:55:04.245743   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHHostname
	I1107 23:55:04.245792   38629 ssh_runner.go:195] Run: cat /version.json
	I1107 23:55:04.245817   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHHostname
	I1107 23:55:04.248335   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:04.248570   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:04.248676   38629 main.go:141] libmachine: (test-preload-197747) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:79", ip: ""} in network mk-test-preload-197747: {Iface:virbr1 ExpiryTime:2023-11-08 00:54:56 +0000 UTC Type:0 Mac:52:54:00:1c:4b:79 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:test-preload-197747 Clientid:01:52:54:00:1c:4b:79}
	I1107 23:55:04.248711   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined IP address 192.168.39.173 and MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:04.248784   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHPort
	I1107 23:55:04.248970   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHKeyPath
	I1107 23:55:04.249049   38629 main.go:141] libmachine: (test-preload-197747) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:79", ip: ""} in network mk-test-preload-197747: {Iface:virbr1 ExpiryTime:2023-11-08 00:54:56 +0000 UTC Type:0 Mac:52:54:00:1c:4b:79 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:test-preload-197747 Clientid:01:52:54:00:1c:4b:79}
	I1107 23:55:04.249088   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined IP address 192.168.39.173 and MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:04.249118   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHUsername
	I1107 23:55:04.249238   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHPort
	I1107 23:55:04.249309   38629 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/test-preload-197747/id_rsa Username:docker}
	I1107 23:55:04.249399   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHKeyPath
	I1107 23:55:04.249528   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHUsername
	I1107 23:55:04.249690   38629 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/test-preload-197747/id_rsa Username:docker}
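
Note that the two Run commands above (the registry reachability curl and the /version.json read) are issued concurrently over separate SSH sessions. A stdlib-only sketch of that fan-out, with local commands standing in for ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
        "sync"
    )

    func main() {
        cmds := []*exec.Cmd{
            exec.Command("sh", "-c", "curl -sS -m 2 https://registry.k8s.io/"),
            exec.Command("cat", "/version.json"), // this path exists only inside the VM
        }
        var wg sync.WaitGroup
        errs := make([]error, len(cmds))
        for i, c := range cmds {
            wg.Add(1)
            go func(i int, c *exec.Cmd) {
                defer wg.Done()
                errs[i] = c.Run() // each command runs in its own goroutine
            }(i, c)
        }
        wg.Wait()
        fmt.Println(errs)
    }
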
	I1107 23:55:04.341804   38629 ssh_runner.go:195] Run: systemctl --version
	I1107 23:55:04.364427   38629 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1107 23:55:04.503447   38629 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1107 23:55:04.510109   38629 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1107 23:55:04.510168   38629 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:55:04.524671   38629 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1107 23:55:04.524695   38629 start.go:472] detecting cgroup driver to use...
	I1107 23:55:04.524764   38629 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1107 23:55:04.538124   38629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1107 23:55:04.549877   38629 docker.go:203] disabling cri-docker service (if available) ...
	I1107 23:55:04.549943   38629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1107 23:55:04.561719   38629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1107 23:55:04.573674   38629 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1107 23:55:04.673222   38629 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1107 23:55:04.794853   38629 docker.go:219] disabling docker service ...
	I1107 23:55:04.794932   38629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1107 23:55:04.808906   38629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1107 23:55:04.820427   38629 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1107 23:55:04.933663   38629 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1107 23:55:05.045021   38629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
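
Disabling cri-docker and docker above follows a fixed stop/disable/mask sequence per unit. A sketch of that sequence; error handling is simplified, and stop failures are tolerated because the unit may not be running in the first place:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // disableService replays the stop/disable/mask steps used above.
    func disableService(unit string) {
        steps := [][]string{
            {"systemctl", "stop", "-f", unit + ".socket"},
            {"systemctl", "stop", "-f", unit + ".service"},
            {"systemctl", "disable", unit + ".socket"},
            {"systemctl", "mask", unit + ".service"},
        }
        for _, s := range steps {
            if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
                fmt.Printf("%v: %v (%s)\n", s, err, out) // tolerated, e.g. unit not running
            }
        }
    }

    func main() {
        disableService("cri-docker")
        disableService("docker")
    }
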
	I1107 23:55:05.058478   38629 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 23:55:05.075944   38629 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1107 23:55:05.076014   38629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:55:05.084984   38629 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1107 23:55:05.085048   38629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:55:05.094004   38629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:55:05.102853   38629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:55:05.111724   38629 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
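
The cri-o configuration above is done with in-place sed edits to 02-crio.conf: set the pause image, set the cgroup manager, and re-pin conmon_cgroup. A sketch that assembles the same commands (printed here rather than executed):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // crioConfigCmds reproduces the sed edits from the log: point cri-o at
    // the desired pause image and cgroup manager in 02-crio.conf.
    func crioConfigCmds(pauseImage, cgroupManager string) []*exec.Cmd {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        return []*exec.Cmd{
            exec.Command("sudo", "sed", "-i",
                fmt.Sprintf(`s|^.*pause_image = .*$|pause_image = "%s"|`, pauseImage), conf),
            exec.Command("sudo", "sed", "-i",
                fmt.Sprintf(`s|^.*cgroup_manager = .*$|cgroup_manager = "%s"|`, cgroupManager), conf),
            exec.Command("sudo", "sed", "-i", "/conmon_cgroup = .*/d", conf),
            exec.Command("sudo", "sed", "-i",
                `/cgroup_manager = .*/a conmon_cgroup = "pod"`, conf),
        }
    }

    func main() {
        for _, c := range crioConfigCmds("registry.k8s.io/pause:3.7", "cgroupfs") {
            fmt.Println(c.String())
        }
    }
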
	I1107 23:55:05.120868   38629 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1107 23:55:05.128737   38629 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1107 23:55:05.128781   38629 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1107 23:55:05.140236   38629 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
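
The sysctl failure above is expected when br_netfilter is not loaded; loading the module creates the /proc/sys/net/bridge keys, after which ip_forward is switched on. A sketch of that try-then-fallback:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // ensureBridgeNetfilter mirrors the fallback in the log: if the sysctl
    // key is missing, load br_netfilter, then enable IPv4 forwarding.
    func ensureBridgeNetfilter() error {
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            // Key absent: the kernel module is not loaded yet.
            if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
                return fmt.Errorf("modprobe br_netfilter: %w", err)
            }
        }
        return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }

    func main() {
        if err := ensureBridgeNetfilter(); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
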
	I1107 23:55:05.149703   38629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 23:55:05.262789   38629 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1107 23:55:05.433370   38629 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1107 23:55:05.433452   38629 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1107 23:55:05.438550   38629 start.go:540] Will wait 60s for crictl version
	I1107 23:55:05.438602   38629 ssh_runner.go:195] Run: which crictl
	I1107 23:55:05.442463   38629 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1107 23:55:05.479680   38629 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1107 23:55:05.479743   38629 ssh_runner.go:195] Run: crio --version
	I1107 23:55:05.522928   38629 ssh_runner.go:195] Run: crio --version
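
"Will wait 60s for socket path" above is a simple stat poll with a deadline. A sketch; the 500ms poll interval is an assumption:

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls stat on the cri-o socket until it appears or the
    // 60s budget runs out.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for " + path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
    }
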
	I1107 23:55:05.575619   38629 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.24.1 ...
	I1107 23:55:05.577159   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetIP
	I1107 23:55:05.579756   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:05.580129   38629 main.go:141] libmachine: (test-preload-197747) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:79", ip: ""} in network mk-test-preload-197747: {Iface:virbr1 ExpiryTime:2023-11-08 00:54:56 +0000 UTC Type:0 Mac:52:54:00:1c:4b:79 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:test-preload-197747 Clientid:01:52:54:00:1c:4b:79}
	I1107 23:55:05.580151   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined IP address 192.168.39.173 and MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:05.580386   38629 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1107 23:55:05.584539   38629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 23:55:05.597267   38629 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1107 23:55:05.597338   38629 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 23:55:05.640373   38629 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1107 23:55:05.640442   38629 ssh_runner.go:195] Run: which lz4
	I1107 23:55:05.644420   38629 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1107 23:55:05.648644   38629 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1107 23:55:05.648677   38629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I1107 23:55:07.382503   38629 crio.go:444] Took 1.738114 seconds to copy over tarball
	I1107 23:55:07.382593   38629 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1107 23:55:10.389322   38629 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.006695872s)
	I1107 23:55:10.389351   38629 crio.go:451] Took 3.006821 seconds to extract the tarball
	I1107 23:55:10.389362   38629 ssh_runner.go:146] rm: /preloaded.tar.lz4
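
The preload sequence above is: stat the remote tarball, copy it over only when the existence check fails, extract with tar -I lz4 under /var, then delete it. A sketch of that sequence; the paths are illustrative and the commands run locally here instead of over SSH:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // loadPreload copies the lz4 tarball only when missing, unpacks it
    // under /var, and removes it — the stat / copy / tar / rm chain above.
    func loadPreload(local, remote string) error {
        if _, err := os.Stat(remote); err != nil {
            if err := exec.Command("cp", local, remote).Run(); err != nil { // scp in the real flow
                return err
            }
        }
        if out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", remote).CombinedOutput(); err != nil {
            return fmt.Errorf("extract: %v (%s)", err, out)
        }
        return os.Remove(remote)
    }

    func main() {
        fmt.Println(loadPreload("preloaded-images.tar.lz4", "/preloaded.tar.lz4"))
    }
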
	I1107 23:55:10.429098   38629 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 23:55:10.475029   38629 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1107 23:55:10.475056   38629 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1107 23:55:10.475134   38629 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:55:10.475154   38629 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1107 23:55:10.475166   38629 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1107 23:55:10.475184   38629 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1107 23:55:10.475217   38629 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1107 23:55:10.475249   38629 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1107 23:55:10.475263   38629 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I1107 23:55:10.475345   38629 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1107 23:55:10.476243   38629 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1107 23:55:10.476647   38629 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1107 23:55:10.476655   38629 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1107 23:55:10.476658   38629 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:55:10.476655   38629 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1107 23:55:10.476655   38629 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1107 23:55:10.476656   38629 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1107 23:55:10.476702   38629 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1107 23:55:10.603024   38629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1107 23:55:10.635533   38629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1107 23:55:10.648755   38629 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1107 23:55:10.648823   38629 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1107 23:55:10.648876   38629 ssh_runner.go:195] Run: which crictl
	I1107 23:55:10.649987   38629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1107 23:55:10.690401   38629 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1107 23:55:10.690455   38629 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1107 23:55:10.690470   38629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1107 23:55:10.690494   38629 ssh_runner.go:195] Run: which crictl
	I1107 23:55:10.710352   38629 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1107 23:55:10.710406   38629 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1107 23:55:10.710441   38629 ssh_runner.go:195] Run: which crictl
	I1107 23:55:10.736670   38629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1107 23:55:10.736691   38629 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1107 23:55:10.736707   38629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1107 23:55:10.736787   38629 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1107 23:55:10.747216   38629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1107 23:55:10.748506   38629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1107 23:55:10.754223   38629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1107 23:55:10.754521   38629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1107 23:55:10.813183   38629 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1107 23:55:10.813215   38629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I1107 23:55:10.813230   38629 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1107 23:55:10.813274   38629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1107 23:55:10.813280   38629 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1107 23:55:10.831185   38629 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1107 23:55:10.831268   38629 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1107 23:55:10.919822   38629 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1107 23:55:10.919860   38629 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1107 23:55:10.919908   38629 ssh_runner.go:195] Run: which crictl
	I1107 23:55:10.920014   38629 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1107 23:55:10.920050   38629 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1107 23:55:10.920097   38629 ssh_runner.go:195] Run: which crictl
	I1107 23:55:10.943510   38629 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1107 23:55:10.943547   38629 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1107 23:55:10.943550   38629 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1107 23:55:10.943573   38629 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1107 23:55:10.943596   38629 ssh_runner.go:195] Run: which crictl
	I1107 23:55:10.943599   38629 ssh_runner.go:195] Run: which crictl
	I1107 23:55:10.943630   38629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I1107 23:55:11.435390   38629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:55:13.359632   38629 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.24.4: (2.528342513s)
	I1107 23:55:13.359673   38629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I1107 23:55:13.359693   38629 ssh_runner.go:235] Completed: which crictl: (2.439580477s)
	I1107 23:55:13.359738   38629 ssh_runner.go:235] Completed: which crictl: (2.439814638s)
	I1107 23:55:13.359752   38629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1107 23:55:13.359767   38629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1107 23:55:13.359693   38629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.546394766s)
	I1107 23:55:13.359788   38629 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1107 23:55:13.359807   38629 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1107 23:55:13.359814   38629 ssh_runner.go:235] Completed: which crictl: (2.416204288s)
	I1107 23:55:13.359832   38629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1107 23:55:13.359878   38629 ssh_runner.go:235] Completed: which crictl: (2.416268744s)
	I1107 23:55:13.359889   38629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1107 23:55:13.359919   38629 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.924503237s)
	I1107 23:55:13.359922   38629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1107 23:55:13.499455   38629 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1107 23:55:13.499562   38629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1107 23:55:13.502159   38629 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1107 23:55:13.502216   38629 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1107 23:55:13.502257   38629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1107 23:55:13.502302   38629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1107 23:55:14.255698   38629 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1107 23:55:14.255804   38629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1107 23:55:14.255837   38629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I1107 23:55:14.255903   38629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1107 23:55:14.255938   38629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1107 23:55:14.255950   38629 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1107 23:55:14.255978   38629 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1107 23:55:14.256015   38629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1107 23:55:14.261195   38629 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1107 23:55:15.000635   38629 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1107 23:55:15.000687   38629 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1107 23:55:15.000738   38629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1107 23:55:15.443648   38629 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1107 23:55:15.443696   38629 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1107 23:55:15.443748   38629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1107 23:55:17.495070   38629 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.051292979s)
	I1107 23:55:17.495098   38629 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1107 23:55:17.495123   38629 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.7
	I1107 23:55:17.495164   38629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1107 23:55:17.637834   38629 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1107 23:55:17.637874   38629 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1107 23:55:17.637920   38629 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1107 23:55:18.079810   38629 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1107 23:55:18.079854   38629 cache_images.go:123] Successfully loaded all cached images
	I1107 23:55:18.079861   38629 cache_images.go:92] LoadImages completed in 7.604785574s
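
The image-load sequence above is a check-then-transfer pattern: stat the cached tarball on the guest, ask the runtime whether the image already exists, and only stream the tarball in with podman load when it is missing. A minimal sketch of that pattern, assuming podman is on PATH and using an image/tarball pair taken from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// loadCachedImage mirrors the check-then-load pattern in the log:
	// "podman image inspect" exits non-zero when the image is absent,
	// and only then is the cached tarball streamed in with "podman load".
	func loadCachedImage(image, tarball string) error {
		if err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Run(); err == nil {
			return nil // already present in the runtime, skip the transfer
		}
		out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
		}
		return nil
	}

	func main() {
		// image/path pair taken from the log above
		if err := loadCachedImage("registry.k8s.io/pause:3.7",
			"/var/lib/minikube/images/pause_3.7"); err != nil {
			fmt.Println(err)
		}
	}
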
	I1107 23:55:18.079922   38629 ssh_runner.go:195] Run: crio config
	I1107 23:55:18.132890   38629 cni.go:84] Creating CNI manager for ""
	I1107 23:55:18.132912   38629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1107 23:55:18.132930   38629 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 23:55:18.132950   38629 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.173 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-197747 NodeName:test-preload-197747 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.173"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.173 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1107 23:55:18.133111   38629 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.173
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-197747"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.173
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.173"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 23:55:18.133183   38629 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-197747 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.173
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-197747 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1107 23:55:18.133232   38629 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1107 23:55:18.142561   38629 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 23:55:18.142640   38629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 23:55:18.151424   38629 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1107 23:55:18.166778   38629 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 23:55:18.182274   38629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
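
The three "scp memory -->" steps above materialize the kubelet unit, its kubeadm drop-in, and the new kubeadm config directly from in-memory buffers. A sketch of the same write-then-reload flow, assuming sudo tee is an acceptable stand-in for minikube's internal memory-to-file copy (the drop-in body here is abbreviated; the full flag set is the [Unit]/[Service] block shown earlier in the log):

	package main

	import (
		"os/exec"
		"strings"
	)

	// writeAsRoot pipes content into "sudo tee", a stand-in for the
	// "scp memory -->" steps in the log.
	func writeAsRoot(path, content string) error {
		cmd := exec.Command("sudo", "tee", path)
		cmd.Stdin = strings.NewReader(content)
		return cmd.Run()
	}

	func main() {
		_ = exec.Command("sudo", "mkdir", "-p",
			"/etc/systemd/system/kubelet.service.d").Run()
		// abbreviated drop-in; real flags as logged above
		dropIn := "[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --config=/var/lib/kubelet/config.yaml\n"
		if err := writeAsRoot("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", dropIn); err == nil {
			_ = exec.Command("sudo", "systemctl", "daemon-reload").Run()
		}
	}
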
	I1107 23:55:18.198405   38629 ssh_runner.go:195] Run: grep 192.168.39.173	control-plane.minikube.internal$ /etc/hosts
	I1107 23:55:18.201950   38629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.173	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
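
The bash one-liner above keeps the /etc/hosts entry idempotent: it filters out any stale control-plane.minikube.internal line, appends the current mapping, and copies the result back over /etc/hosts. The same filter-and-append done in-process, as a sketch (assumes it runs with write access to /etc/hosts, e.g. as root):

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.39.173\tcontrol-plane.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// drop any stale mapping for the control-plane name
			if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
	}
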
	I1107 23:55:18.212945   38629 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/test-preload-197747 for IP: 192.168.39.173
	I1107 23:55:18.212977   38629 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:55:18.213114   38629 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1107 23:55:18.213153   38629 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1107 23:55:18.213211   38629 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/test-preload-197747/client.key
	I1107 23:55:18.213259   38629 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/test-preload-197747/apiserver.key.43b841b8
	I1107 23:55:18.213299   38629 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/test-preload-197747/proxy-client.key
	I1107 23:55:18.213407   38629 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem (1338 bytes)
	W1107 23:55:18.213453   38629 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848_empty.pem, impossibly tiny 0 bytes
	I1107 23:55:18.213463   38629 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 23:55:18.213485   38629 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1107 23:55:18.213509   38629 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1107 23:55:18.213530   38629 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1107 23:55:18.213573   38629 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem (1708 bytes)
	I1107 23:55:18.214172   38629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/test-preload-197747/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 23:55:18.237328   38629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/test-preload-197747/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1107 23:55:18.259724   38629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/test-preload-197747/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 23:55:18.282147   38629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/test-preload-197747/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1107 23:55:18.303723   38629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 23:55:18.326221   38629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1107 23:55:18.348417   38629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 23:55:18.370942   38629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1107 23:55:18.393341   38629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 23:55:18.415352   38629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem --> /usr/share/ca-certificates/16848.pem (1338 bytes)
	I1107 23:55:18.437173   38629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /usr/share/ca-certificates/168482.pem (1708 bytes)
	I1107 23:55:18.459169   38629 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 23:55:18.474710   38629 ssh_runner.go:195] Run: openssl version
	I1107 23:55:18.479812   38629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 23:55:18.488709   38629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:55:18.493169   38629 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:55:18.493205   38629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:55:18.498427   38629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 23:55:18.507233   38629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16848.pem && ln -fs /usr/share/ca-certificates/16848.pem /etc/ssl/certs/16848.pem"
	I1107 23:55:18.515979   38629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16848.pem
	I1107 23:55:18.520235   38629 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1107 23:55:18.520277   38629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16848.pem
	I1107 23:55:18.525495   38629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16848.pem /etc/ssl/certs/51391683.0"
	I1107 23:55:18.534253   38629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168482.pem && ln -fs /usr/share/ca-certificates/168482.pem /etc/ssl/certs/168482.pem"
	I1107 23:55:18.543199   38629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168482.pem
	I1107 23:55:18.547591   38629 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1107 23:55:18.547633   38629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168482.pem
	I1107 23:55:18.552750   38629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168482.pem /etc/ssl/certs/3ec20f2e.0"
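
The openssl/ln pairs above implement the standard OpenSSL CA directory layout: each certificate is linked under /etc/ssl/certs as <subject-hash>.0 so that TLS clients can locate it by hash (b5213941.0 for minikubeCA here). A sketch of computing the hash and creating the link, assuming openssl is installed; linkByHash is a hypothetical helper name:

	package main

	import (
		"fmt"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func linkByHash(pem string) error {
		// "openssl x509 -hash" prints the subject-name hash that forms
		// the <hash>.0 lookup names under /etc/ssl/certs.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		return exec.Command("sudo", "ln", "-fs", pem, link).Run()
	}

	func main() {
		if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}
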
	I1107 23:55:18.561419   38629 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1107 23:55:18.565623   38629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1107 23:55:18.571171   38629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1107 23:55:18.576585   38629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1107 23:55:18.581954   38629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1107 23:55:18.587340   38629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1107 23:55:18.593349   38629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
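
The -checkend 86400 probes above ask openssl whether each certificate will still be valid 24 hours from now; a non-zero exit would force regeneration before kubeadm runs. The same sweep, as a sketch over a subset of the cert paths from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		certs := []string{
			"/var/lib/minikube/certs/apiserver-etcd-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		}
		for _, c := range certs {
			// exit status 0 means the cert is valid for at least another 86400s
			if err := exec.Command("openssl", "x509", "-noout", "-in", c,
				"-checkend", "86400").Run(); err != nil {
				fmt.Printf("%s: expiring within 24h (or unreadable): %v\n", c, err)
			}
		}
	}
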
	I1107 23:55:18.598782   38629 kubeadm.go:404] StartCluster: {Name:test-preload-197747 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-197747 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:55:18.598878   38629 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1107 23:55:18.598919   38629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1107 23:55:18.639452   38629 cri.go:89] found id: ""
	I1107 23:55:18.639519   38629 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 23:55:18.648454   38629 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1107 23:55:18.648471   38629 kubeadm.go:636] restartCluster start
	I1107 23:55:18.648528   38629 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1107 23:55:18.656643   38629 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:55:18.657145   38629 kubeconfig.go:135] verify returned: extract IP: "test-preload-197747" does not appear in /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1107 23:55:18.657279   38629 kubeconfig.go:146] "test-preload-197747" context is missing from /home/jenkins/minikube-integration/17585-9647/kubeconfig - will repair!
	I1107 23:55:18.657651   38629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:55:18.658443   38629 kapi.go:59] client config for test-preload-197747: &rest.Config{Host:"https://192.168.39.173:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/test-preload-197747/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/test-preload-197747/client.key", CAFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:55:18.659670   38629 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1107 23:55:18.667777   38629 api_server.go:166] Checking apiserver status ...
	I1107 23:55:18.667825   38629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:55:18.678469   38629 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:55:18.678483   38629 api_server.go:166] Checking apiserver status ...
	I1107 23:55:18.678516   38629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:55:18.687842   38629 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:55:19.188862   38629 api_server.go:166] Checking apiserver status ...
	I1107 23:55:19.188929   38629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:55:19.200023   38629 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:55:19.688862   38629 api_server.go:166] Checking apiserver status ...
	I1107 23:55:19.688933   38629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:55:19.700382   38629 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:55:20.187904   38629 api_server.go:166] Checking apiserver status ...
	I1107 23:55:20.187997   38629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:55:20.198932   38629 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:55:20.688761   38629 api_server.go:166] Checking apiserver status ...
	I1107 23:55:20.688861   38629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:55:20.699925   38629 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:55:21.188441   38629 api_server.go:166] Checking apiserver status ...
	I1107 23:55:21.188506   38629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:55:21.199542   38629 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:55:21.688050   38629 api_server.go:166] Checking apiserver status ...
	I1107 23:55:21.688130   38629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:55:21.699438   38629 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:55:22.187977   38629 api_server.go:166] Checking apiserver status ...
	I1107 23:55:22.188073   38629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:55:22.199466   38629 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:55:22.688026   38629 api_server.go:166] Checking apiserver status ...
	I1107 23:55:22.688101   38629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:55:22.699456   38629 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:55:23.187991   38629 api_server.go:166] Checking apiserver status ...
	I1107 23:55:23.188075   38629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:55:23.199182   38629 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:55:23.688794   38629 api_server.go:166] Checking apiserver status ...
	I1107 23:55:23.688866   38629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:55:23.699723   38629 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:55:24.188855   38629 api_server.go:166] Checking apiserver status ...
	I1107 23:55:24.188939   38629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:55:24.199757   38629 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:55:24.688331   38629 api_server.go:166] Checking apiserver status ...
	I1107 23:55:24.688421   38629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:55:24.699765   38629 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:55:25.188332   38629 api_server.go:166] Checking apiserver status ...
	I1107 23:55:25.188433   38629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:55:25.199235   38629 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:55:25.688847   38629 api_server.go:166] Checking apiserver status ...
	I1107 23:55:25.688915   38629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:55:25.699733   38629 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:55:26.188283   38629 api_server.go:166] Checking apiserver status ...
	I1107 23:55:26.188370   38629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:55:26.199122   38629 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:55:26.688754   38629 api_server.go:166] Checking apiserver status ...
	I1107 23:55:26.688832   38629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:55:26.699623   38629 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:55:27.188225   38629 api_server.go:166] Checking apiserver status ...
	I1107 23:55:27.188304   38629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:55:27.198986   38629 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:55:27.688601   38629 api_server.go:166] Checking apiserver status ...
	I1107 23:55:27.688675   38629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:55:27.698969   38629 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1107 23:55:28.188589   38629 api_server.go:166] Checking apiserver status ...
	I1107 23:55:28.188663   38629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1107 23:55:28.199301   38629 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
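
The repeated blocks above are a fixed-interval poll: for roughly ten seconds (23:55:18.667 to 23:55:28.667), at about 500ms intervals, minikube re-runs sudo pgrep -xnf kube-apiserver.*minikube.* until a matching process appears or the window expires. It expires here, which triggers the reconfigure below. A sketch of that poll loop, with the window as an assumed parameter and waitForProcess as a hypothetical helper name:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForProcess polls pgrep until the pattern matches or the window ends.
	func waitForProcess(pattern string, window time.Duration) bool {
		deadline := time.Now().Add(window)
		for time.Now().Before(deadline) {
			// pgrep exits 0 only when a process matches the pattern
			if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
				return true
			}
			time.Sleep(500 * time.Millisecond)
		}
		return false
	}

	func main() {
		fmt.Println(waitForProcess("kube-apiserver.*minikube.*", 10*time.Second))
	}
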
	I1107 23:55:28.667885   38629 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1107 23:55:28.667922   38629 kubeadm.go:1128] stopping kube-system containers ...
	I1107 23:55:28.667934   38629 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1107 23:55:28.667999   38629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1107 23:55:28.708348   38629 cri.go:89] found id: ""
	I1107 23:55:28.708420   38629 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1107 23:55:28.723305   38629 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 23:55:28.731845   38629 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 23:55:28.731904   38629 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 23:55:28.740605   38629 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1107 23:55:28.740627   38629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 23:55:28.850367   38629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 23:55:29.644261   38629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1107 23:55:29.984697   38629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 23:55:30.064329   38629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
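
Rather than a full kubeadm init, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same config file. The same sequence as a sketch, calling the versioned binary directly instead of via env PATH as the log does:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		const kubeadm = "/var/lib/minikube/binaries/v1.24.4/kubeadm"
		const config = "/var/tmp/minikube/kubeadm.yaml"
		phases := [][]string{
			{"certs", "all"},
			{"kubeconfig", "all"},
			{"kubelet-start"},
			{"control-plane", "all"},
			{"etcd", "local"},
		}
		for _, p := range phases {
			args := append([]string{"init", "phase"}, p...)
			args = append(args, "--config", config)
			if out, err := exec.Command("sudo", kubeadm, args...).CombinedOutput(); err != nil {
				fmt.Printf("kubeadm %v failed: %v\n%s", p, err, out)
				return
			}
		}
	}
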
	I1107 23:55:30.132661   38629 api_server.go:52] waiting for apiserver process to appear ...
	I1107 23:55:30.132742   38629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:55:30.162431   38629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:55:30.684907   38629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:55:31.184942   38629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:55:31.684556   38629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:55:32.184768   38629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:55:32.205917   38629 api_server.go:72] duration metric: took 2.073257483s to wait for apiserver process to appear ...
	I1107 23:55:32.205938   38629 api_server.go:88] waiting for apiserver healthz status ...
	I1107 23:55:32.205962   38629 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I1107 23:55:37.206966   38629 api_server.go:269] stopped: https://192.168.39.173:8443/healthz: Get "https://192.168.39.173:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1107 23:55:37.207008   38629 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I1107 23:55:37.790398   38629 api_server.go:279] https://192.168.39.173:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1107 23:55:37.790436   38629 api_server.go:103] status: https://192.168.39.173:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1107 23:55:38.291106   38629 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I1107 23:55:38.303281   38629 api_server.go:279] https://192.168.39.173:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1107 23:55:38.303309   38629 api_server.go:103] status: https://192.168.39.173:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1107 23:55:38.791042   38629 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I1107 23:55:38.798889   38629 api_server.go:279] https://192.168.39.173:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1107 23:55:38.798914   38629 api_server.go:103] status: https://192.168.39.173:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1107 23:55:39.290999   38629 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I1107 23:55:39.296676   38629 api_server.go:279] https://192.168.39.173:8443/healthz returned 200:
	ok
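
The probe sequence above is the typical shape of an apiserver coming up: first 403 (anonymous requests are rejected until the RBAC bootstrap roles exist), then 500 with a per-hook breakdown while post-start hooks finish, then 200. A sketch of the probe itself; certificate verification is skipped purely to keep the sketch short, whereas a real checker should trust the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// InsecureSkipVerify only for brevity in this sketch
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.173:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
	}
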
	I1107 23:55:39.307210   38629 api_server.go:141] control plane version: v1.24.4
	I1107 23:55:39.307231   38629 api_server.go:131] duration metric: took 7.101286864s to wait for apiserver health ...
	I1107 23:55:39.307240   38629 cni.go:84] Creating CNI manager for ""
	I1107 23:55:39.307248   38629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1107 23:55:39.309363   38629 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1107 23:55:39.311050   38629 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1107 23:55:39.321134   38629 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1107 23:55:39.342433   38629 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 23:55:39.352373   38629 system_pods.go:59] 8 kube-system pods found
	I1107 23:55:39.352396   38629 system_pods.go:61] "coredns-6d4b75cb6d-dxgjv" [d1983145-4001-42a7-b847-028bae4268c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1107 23:55:39.352403   38629 system_pods.go:61] "coredns-6d4b75cb6d-wcx69" [48e939c1-85ba-4d4d-b12d-e285906a9dbf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1107 23:55:39.352410   38629 system_pods.go:61] "etcd-test-preload-197747" [3e61dac7-733e-4b21-a458-b15919460608] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1107 23:55:39.352431   38629 system_pods.go:61] "kube-apiserver-test-preload-197747" [ed90a414-cda9-4d04-a8b9-5c4fe602c3aa] Running
	I1107 23:55:39.352440   38629 system_pods.go:61] "kube-controller-manager-test-preload-197747" [51fc252a-2b1c-4adf-99f2-ea42a495dc8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1107 23:55:39.352450   38629 system_pods.go:61] "kube-proxy-4llrf" [82563fb2-5baa-47f1-8bde-5e606e315396] Running
	I1107 23:55:39.352458   38629 system_pods.go:61] "kube-scheduler-test-preload-197747" [d0087c30-0234-403a-b04b-dcb4921ed6fa] Running
	I1107 23:55:39.352468   38629 system_pods.go:61] "storage-provisioner" [5eff1c4e-08ce-42a5-a739-35670b1bd74d] Running
	I1107 23:55:39.352475   38629 system_pods.go:74] duration metric: took 10.021877ms to wait for pod list to return data ...
	I1107 23:55:39.352484   38629 node_conditions.go:102] verifying NodePressure condition ...
	I1107 23:55:39.367056   38629 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1107 23:55:39.367086   38629 node_conditions.go:123] node cpu capacity is 2
	I1107 23:55:39.367096   38629 node_conditions.go:105] duration metric: took 14.607873ms to run NodePressure ...
	I1107 23:55:39.367112   38629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1107 23:55:39.655798   38629 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1107 23:55:39.660464   38629 kubeadm.go:787] kubelet initialised
	I1107 23:55:39.660484   38629 kubeadm.go:788] duration metric: took 4.66493ms waiting for restarted kubelet to initialise ...
	I1107 23:55:39.660491   38629 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:55:39.666783   38629 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-dxgjv" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:39.674103   38629 pod_ready.go:97] node "test-preload-197747" hosting pod "coredns-6d4b75cb6d-dxgjv" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-197747" has status "Ready":"False"
	I1107 23:55:39.674126   38629 pod_ready.go:81] duration metric: took 7.320692ms waiting for pod "coredns-6d4b75cb6d-dxgjv" in "kube-system" namespace to be "Ready" ...
	E1107 23:55:39.674134   38629 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-197747" hosting pod "coredns-6d4b75cb6d-dxgjv" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-197747" has status "Ready":"False"
	I1107 23:55:39.674147   38629 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-wcx69" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:39.689640   38629 pod_ready.go:97] node "test-preload-197747" hosting pod "coredns-6d4b75cb6d-wcx69" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-197747" has status "Ready":"False"
	I1107 23:55:39.689663   38629 pod_ready.go:81] duration metric: took 15.50413ms waiting for pod "coredns-6d4b75cb6d-wcx69" in "kube-system" namespace to be "Ready" ...
	E1107 23:55:39.689672   38629 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-197747" hosting pod "coredns-6d4b75cb6d-wcx69" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-197747" has status "Ready":"False"
	I1107 23:55:39.689680   38629 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-197747" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:39.699532   38629 pod_ready.go:97] node "test-preload-197747" hosting pod "etcd-test-preload-197747" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-197747" has status "Ready":"False"
	I1107 23:55:39.699557   38629 pod_ready.go:81] duration metric: took 9.859838ms waiting for pod "etcd-test-preload-197747" in "kube-system" namespace to be "Ready" ...
	E1107 23:55:39.699565   38629 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-197747" hosting pod "etcd-test-preload-197747" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-197747" has status "Ready":"False"
	I1107 23:55:39.699572   38629 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-197747" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:39.753877   38629 pod_ready.go:97] node "test-preload-197747" hosting pod "kube-apiserver-test-preload-197747" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-197747" has status "Ready":"False"
	I1107 23:55:39.753902   38629 pod_ready.go:81] duration metric: took 54.314933ms waiting for pod "kube-apiserver-test-preload-197747" in "kube-system" namespace to be "Ready" ...
	E1107 23:55:39.753912   38629 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-197747" hosting pod "kube-apiserver-test-preload-197747" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-197747" has status "Ready":"False"
	I1107 23:55:39.753918   38629 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-197747" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:40.149701   38629 pod_ready.go:97] node "test-preload-197747" hosting pod "kube-controller-manager-test-preload-197747" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-197747" has status "Ready":"False"
	I1107 23:55:40.149736   38629 pod_ready.go:81] duration metric: took 395.807373ms waiting for pod "kube-controller-manager-test-preload-197747" in "kube-system" namespace to be "Ready" ...
	E1107 23:55:40.149750   38629 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-197747" hosting pod "kube-controller-manager-test-preload-197747" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-197747" has status "Ready":"False"
	I1107 23:55:40.149757   38629 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4llrf" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:40.547794   38629 pod_ready.go:97] node "test-preload-197747" hosting pod "kube-proxy-4llrf" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-197747" has status "Ready":"False"
	I1107 23:55:40.547821   38629 pod_ready.go:81] duration metric: took 398.052185ms waiting for pod "kube-proxy-4llrf" in "kube-system" namespace to be "Ready" ...
	E1107 23:55:40.547830   38629 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-197747" hosting pod "kube-proxy-4llrf" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-197747" has status "Ready":"False"
	I1107 23:55:40.547835   38629 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-197747" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:40.947263   38629 pod_ready.go:97] node "test-preload-197747" hosting pod "kube-scheduler-test-preload-197747" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-197747" has status "Ready":"False"
	I1107 23:55:40.947292   38629 pod_ready.go:81] duration metric: took 399.451181ms waiting for pod "kube-scheduler-test-preload-197747" in "kube-system" namespace to be "Ready" ...
	E1107 23:55:40.947301   38629 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-197747" hosting pod "kube-scheduler-test-preload-197747" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-197747" has status "Ready":"False"
	I1107 23:55:40.947311   38629 pod_ready.go:38] duration metric: took 1.286807773s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
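
Each per-pod wait above short-circuits with "skipping!" because the node itself is not yet Ready, so pod readiness cannot be trusted; the loop then retries within the 4m0s budget. Roughly the same ordering can be reproduced from outside with kubectl wait (a sketch; context and node names taken from the log, selectors chosen for illustration):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// wait for the node first, then for the system-critical pods
		cmds := [][]string{
			{"kubectl", "--context", "test-preload-197747",
				"wait", "--for=condition=Ready", "node/test-preload-197747", "--timeout=4m"},
			{"kubectl", "--context", "test-preload-197747", "-n", "kube-system",
				"wait", "--for=condition=Ready", "pod", "-l", "k8s-app=kube-dns", "--timeout=4m"},
		}
		for _, c := range cmds {
			if out, err := exec.Command(c[0], c[1:]...).CombinedOutput(); err != nil {
				fmt.Printf("%v: %v\n%s", c, err, out)
				return
			}
		}
	}
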
	I1107 23:55:40.947331   38629 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1107 23:55:40.959581   38629 ops.go:34] apiserver oom_adj: -16
	I1107 23:55:40.959604   38629 kubeadm.go:640] restartCluster took 22.311126548s
	I1107 23:55:40.959614   38629 kubeadm.go:406] StartCluster complete in 22.360836391s
	I1107 23:55:40.959633   38629 settings.go:142] acquiring lock: {Name:mk24113e0811d0822c92609e9886aa6fa175d90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:55:40.959713   38629 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1107 23:55:40.960344   38629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:55:40.960547   38629 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1107 23:55:40.960640   38629 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1107 23:55:40.960711   38629 addons.go:69] Setting storage-provisioner=true in profile "test-preload-197747"
	I1107 23:55:40.960746   38629 addons.go:231] Setting addon storage-provisioner=true in "test-preload-197747"
	I1107 23:55:40.960726   38629 addons.go:69] Setting default-storageclass=true in profile "test-preload-197747"
	I1107 23:55:40.960762   38629 config.go:182] Loaded profile config "test-preload-197747": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1107 23:55:40.960779   38629 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-197747"
	W1107 23:55:40.960754   38629 addons.go:240] addon storage-provisioner should already be in state true
	I1107 23:55:40.960853   38629 host.go:66] Checking if "test-preload-197747" exists ...
	I1107 23:55:40.961133   38629 kapi.go:59] client config for test-preload-197747: &rest.Config{Host:"https://192.168.39.173:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/test-preload-197747/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/test-preload-197747/client.key", CAFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:55:40.961244   38629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:55:40.961244   38629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:55:40.961288   38629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:55:40.961297   38629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:55:40.964789   38629 kapi.go:248] "coredns" deployment in "kube-system" namespace and "test-preload-197747" context rescaled to 1 replicas
	I1107 23:55:40.964827   38629 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.173 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1107 23:55:40.966811   38629 out.go:177] * Verifying Kubernetes components...
	I1107 23:55:40.968184   38629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:55:40.975832   38629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44461
	I1107 23:55:40.976254   38629 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:55:40.976369   38629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42725
	I1107 23:55:40.976767   38629 main.go:141] libmachine: Using API Version  1
	I1107 23:55:40.976789   38629 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:55:40.976835   38629 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:55:40.977125   38629 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:55:40.977283   38629 main.go:141] libmachine: Using API Version  1
	I1107 23:55:40.977303   38629 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:55:40.977382   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetState
	I1107 23:55:40.977573   38629 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:55:40.978132   38629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:55:40.978168   38629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:55:40.979573   38629 kapi.go:59] client config for test-preload-197747: &rest.Config{Host:"https://192.168.39.173:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/test-preload-197747/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/test-preload-197747/client.key", CAFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:55:40.979849   38629 addons.go:231] Setting addon default-storageclass=true in "test-preload-197747"
	W1107 23:55:40.979864   38629 addons.go:240] addon default-storageclass should already be in state true
	I1107 23:55:40.979886   38629 host.go:66] Checking if "test-preload-197747" exists ...
	I1107 23:55:40.980207   38629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:55:40.980238   38629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:55:40.993394   38629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42873
	I1107 23:55:40.993581   38629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34871
	I1107 23:55:40.993770   38629 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:55:40.993873   38629 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:55:40.994434   38629 main.go:141] libmachine: Using API Version  1
	I1107 23:55:40.994453   38629 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:55:40.994618   38629 main.go:141] libmachine: Using API Version  1
	I1107 23:55:40.994639   38629 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:55:40.994807   38629 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:55:40.994968   38629 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:55:40.995028   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetState
	I1107 23:55:40.995552   38629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:55:40.995602   38629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:55:40.996739   38629 main.go:141] libmachine: (test-preload-197747) Calling .DriverName
	I1107 23:55:40.999097   38629 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:55:41.000755   38629 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:55:41.000770   38629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1107 23:55:41.000782   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHHostname
	I1107 23:55:41.003743   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:41.004152   38629 main.go:141] libmachine: (test-preload-197747) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:79", ip: ""} in network mk-test-preload-197747: {Iface:virbr1 ExpiryTime:2023-11-08 00:54:56 +0000 UTC Type:0 Mac:52:54:00:1c:4b:79 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:test-preload-197747 Clientid:01:52:54:00:1c:4b:79}
	I1107 23:55:41.004180   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined IP address 192.168.39.173 and MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:41.004331   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHPort
	I1107 23:55:41.004490   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHKeyPath
	I1107 23:55:41.004724   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHUsername
	I1107 23:55:41.004880   38629 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/test-preload-197747/id_rsa Username:docker}
	I1107 23:55:41.009584   38629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37795
	I1107 23:55:41.009883   38629 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:55:41.010319   38629 main.go:141] libmachine: Using API Version  1
	I1107 23:55:41.010344   38629 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:55:41.010652   38629 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:55:41.010830   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetState
	I1107 23:55:41.012089   38629 main.go:141] libmachine: (test-preload-197747) Calling .DriverName
	I1107 23:55:41.012301   38629 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1107 23:55:41.012316   38629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1107 23:55:41.012332   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHHostname
	I1107 23:55:41.014870   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:41.015265   38629 main.go:141] libmachine: (test-preload-197747) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:4b:79", ip: ""} in network mk-test-preload-197747: {Iface:virbr1 ExpiryTime:2023-11-08 00:54:56 +0000 UTC Type:0 Mac:52:54:00:1c:4b:79 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:test-preload-197747 Clientid:01:52:54:00:1c:4b:79}
	I1107 23:55:41.015292   38629 main.go:141] libmachine: (test-preload-197747) DBG | domain test-preload-197747 has defined IP address 192.168.39.173 and MAC address 52:54:00:1c:4b:79 in network mk-test-preload-197747
	I1107 23:55:41.015492   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHPort
	I1107 23:55:41.015631   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHKeyPath
	I1107 23:55:41.015772   38629 main.go:141] libmachine: (test-preload-197747) Calling .GetSSHUsername
	I1107 23:55:41.015892   38629 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/test-preload-197747/id_rsa Username:docker}
	I1107 23:55:41.165145   38629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:55:41.172690   38629 node_ready.go:35] waiting up to 6m0s for node "test-preload-197747" to be "Ready" ...
	I1107 23:55:41.172756   38629 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1107 23:55:41.188125   38629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1107 23:55:42.143811   38629 main.go:141] libmachine: Making call to close driver server
	I1107 23:55:42.143817   38629 main.go:141] libmachine: Making call to close driver server
	I1107 23:55:42.143830   38629 main.go:141] libmachine: (test-preload-197747) Calling .Close
	I1107 23:55:42.143839   38629 main.go:141] libmachine: (test-preload-197747) Calling .Close
	I1107 23:55:42.144116   38629 main.go:141] libmachine: (test-preload-197747) DBG | Closing plugin on server side
	I1107 23:55:42.144146   38629 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:55:42.144181   38629 main.go:141] libmachine: (test-preload-197747) DBG | Closing plugin on server side
	I1107 23:55:42.144197   38629 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:55:42.144212   38629 main.go:141] libmachine: Making call to close driver server
	I1107 23:55:42.144228   38629 main.go:141] libmachine: (test-preload-197747) Calling .Close
	I1107 23:55:42.144298   38629 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:55:42.144318   38629 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:55:42.144334   38629 main.go:141] libmachine: Making call to close driver server
	I1107 23:55:42.144345   38629 main.go:141] libmachine: (test-preload-197747) Calling .Close
	I1107 23:55:42.144434   38629 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:55:42.144470   38629 main.go:141] libmachine: (test-preload-197747) DBG | Closing plugin on server side
	I1107 23:55:42.144508   38629 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:55:42.144599   38629 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:55:42.144614   38629 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:55:42.151076   38629 main.go:141] libmachine: Making call to close driver server
	I1107 23:55:42.151092   38629 main.go:141] libmachine: (test-preload-197747) Calling .Close
	I1107 23:55:42.151312   38629 main.go:141] libmachine: Successfully made call to close driver server
	I1107 23:55:42.151328   38629 main.go:141] libmachine: Making call to close connection to plugin binary
	I1107 23:55:42.151348   38629 main.go:141] libmachine: (test-preload-197747) DBG | Closing plugin on server side
	I1107 23:55:42.153307   38629 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1107 23:55:42.154536   38629 addons.go:502] enable addons completed in 1.193910786s: enabled=[storage-provisioner default-storageclass]
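
Enabling these two addons reduces to copying each manifest onto the node (the scp lines above) and applying it with the node's bundled kubectl (the ssh_runner lines above). A sketch of the apply step as it would run on the node itself, expressed with os/exec; paths are the ones from this run, and this is an illustration, not minikube's actual ssh_runner:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	// Mirrors the logged command:
    	//   sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
    	//     /var/lib/minikube/binaries/v1.24.4/kubectl apply -f <manifest>
    	// sudo passes leading VAR=value arguments into the child's environment.
    	cmd := exec.Command("sudo",
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.24.4/kubectl",
    		"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		log.Fatal(err)
    	}
    }
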
	I1107 23:55:43.354462   38629 node_ready.go:58] node "test-preload-197747" has status "Ready":"False"
	I1107 23:55:45.851872   38629 node_ready.go:58] node "test-preload-197747" has status "Ready":"False"
	I1107 23:55:47.852506   38629 node_ready.go:58] node "test-preload-197747" has status "Ready":"False"
	I1107 23:55:48.352313   38629 node_ready.go:49] node "test-preload-197747" has status "Ready":"True"
	I1107 23:55:48.352342   38629 node_ready.go:38] duration metric: took 7.179631884s waiting for node "test-preload-197747" to be "Ready" ...
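
The node_ready.go wait above polls the node object until its Ready condition flips to True; here three False polls precede success, about 7 seconds in total. A sketch of the underlying check, assuming client-go (package and function names are hypothetical, not minikube's):

    package readiness

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // nodeReady reports whether the named node currently has condition
    // Ready=True, the predicate the poll above keeps re-evaluating.
    func nodeReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
    	node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

The per-pod waits that follow work the same way, except they read the PodReady condition off each system-critical pod.
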
	I1107 23:55:48.352352   38629 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:55:48.357563   38629 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-wcx69" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:48.362051   38629 pod_ready.go:92] pod "coredns-6d4b75cb6d-wcx69" in "kube-system" namespace has status "Ready":"True"
	I1107 23:55:48.362074   38629 pod_ready.go:81] duration metric: took 4.487389ms waiting for pod "coredns-6d4b75cb6d-wcx69" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:48.362081   38629 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-197747" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:48.366307   38629 pod_ready.go:92] pod "etcd-test-preload-197747" in "kube-system" namespace has status "Ready":"True"
	I1107 23:55:48.366323   38629 pod_ready.go:81] duration metric: took 4.236492ms waiting for pod "etcd-test-preload-197747" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:48.366329   38629 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-197747" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:49.379266   38629 pod_ready.go:92] pod "kube-apiserver-test-preload-197747" in "kube-system" namespace has status "Ready":"True"
	I1107 23:55:49.379289   38629 pod_ready.go:81] duration metric: took 1.012953318s waiting for pod "kube-apiserver-test-preload-197747" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:49.379298   38629 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-197747" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:51.461922   38629 pod_ready.go:92] pod "kube-controller-manager-test-preload-197747" in "kube-system" namespace has status "Ready":"True"
	I1107 23:55:51.461951   38629 pod_ready.go:81] duration metric: took 2.08264657s waiting for pod "kube-controller-manager-test-preload-197747" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:51.461963   38629 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4llrf" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:51.552748   38629 pod_ready.go:92] pod "kube-proxy-4llrf" in "kube-system" namespace has status "Ready":"True"
	I1107 23:55:51.552775   38629 pod_ready.go:81] duration metric: took 90.803597ms waiting for pod "kube-proxy-4llrf" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:51.552787   38629 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-197747" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:51.953544   38629 pod_ready.go:92] pod "kube-scheduler-test-preload-197747" in "kube-system" namespace has status "Ready":"True"
	I1107 23:55:51.953575   38629 pod_ready.go:81] duration metric: took 400.778677ms waiting for pod "kube-scheduler-test-preload-197747" in "kube-system" namespace to be "Ready" ...
	I1107 23:55:51.953592   38629 pod_ready.go:38] duration metric: took 3.601227846s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:55:51.953606   38629 api_server.go:52] waiting for apiserver process to appear ...
	I1107 23:55:51.953651   38629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:55:51.967871   38629 api_server.go:72] duration metric: took 11.003017492s to wait for apiserver process to appear ...
	I1107 23:55:51.967891   38629 api_server.go:88] waiting for apiserver healthz status ...
	I1107 23:55:51.967904   38629 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I1107 23:55:51.973262   38629 api_server.go:279] https://192.168.39.173:8443/healthz returned 200:
	ok
	I1107 23:55:51.974394   38629 api_server.go:141] control plane version: v1.24.4
	I1107 23:55:51.974435   38629 api_server.go:131] duration metric: took 6.538158ms to wait for apiserver health ...
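
The healthz probe above is a plain HTTPS GET; on default-configured apiservers /healthz is served to anonymous clients, so only the cluster CA is needed to verify the connection. A sketch using the CA path from this run:

    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    )

    func main() {
    	// Trust only the cluster CA, mirroring the logged probe's TLS setup.
    	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caPEM)
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{RootCAs: pool},
    	}}
    	resp, err := client.Get("https://192.168.39.173:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok, as logged
    }
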
	I1107 23:55:51.974446   38629 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 23:55:52.155601   38629 system_pods.go:59] 7 kube-system pods found
	I1107 23:55:52.155627   38629 system_pods.go:61] "coredns-6d4b75cb6d-wcx69" [48e939c1-85ba-4d4d-b12d-e285906a9dbf] Running
	I1107 23:55:52.155632   38629 system_pods.go:61] "etcd-test-preload-197747" [3e61dac7-733e-4b21-a458-b15919460608] Running
	I1107 23:55:52.155636   38629 system_pods.go:61] "kube-apiserver-test-preload-197747" [ed90a414-cda9-4d04-a8b9-5c4fe602c3aa] Running
	I1107 23:55:52.155642   38629 system_pods.go:61] "kube-controller-manager-test-preload-197747" [51fc252a-2b1c-4adf-99f2-ea42a495dc8a] Running
	I1107 23:55:52.155646   38629 system_pods.go:61] "kube-proxy-4llrf" [82563fb2-5baa-47f1-8bde-5e606e315396] Running
	I1107 23:55:52.155650   38629 system_pods.go:61] "kube-scheduler-test-preload-197747" [d0087c30-0234-403a-b04b-dcb4921ed6fa] Running
	I1107 23:55:52.155655   38629 system_pods.go:61] "storage-provisioner" [5eff1c4e-08ce-42a5-a739-35670b1bd74d] Running
	I1107 23:55:52.155661   38629 system_pods.go:74] duration metric: took 181.209681ms to wait for pod list to return data ...
	I1107 23:55:52.155666   38629 default_sa.go:34] waiting for default service account to be created ...
	I1107 23:55:52.351742   38629 default_sa.go:45] found service account: "default"
	I1107 23:55:52.351775   38629 default_sa.go:55] duration metric: took 196.102314ms for default service account to be created ...
	I1107 23:55:52.351785   38629 system_pods.go:116] waiting for k8s-apps to be running ...
	I1107 23:55:52.555374   38629 system_pods.go:86] 7 kube-system pods found
	I1107 23:55:52.555399   38629 system_pods.go:89] "coredns-6d4b75cb6d-wcx69" [48e939c1-85ba-4d4d-b12d-e285906a9dbf] Running
	I1107 23:55:52.555403   38629 system_pods.go:89] "etcd-test-preload-197747" [3e61dac7-733e-4b21-a458-b15919460608] Running
	I1107 23:55:52.555408   38629 system_pods.go:89] "kube-apiserver-test-preload-197747" [ed90a414-cda9-4d04-a8b9-5c4fe602c3aa] Running
	I1107 23:55:52.555411   38629 system_pods.go:89] "kube-controller-manager-test-preload-197747" [51fc252a-2b1c-4adf-99f2-ea42a495dc8a] Running
	I1107 23:55:52.555415   38629 system_pods.go:89] "kube-proxy-4llrf" [82563fb2-5baa-47f1-8bde-5e606e315396] Running
	I1107 23:55:52.555419   38629 system_pods.go:89] "kube-scheduler-test-preload-197747" [d0087c30-0234-403a-b04b-dcb4921ed6fa] Running
	I1107 23:55:52.555422   38629 system_pods.go:89] "storage-provisioner" [5eff1c4e-08ce-42a5-a739-35670b1bd74d] Running
	I1107 23:55:52.555427   38629 system_pods.go:126] duration metric: took 203.636997ms to wait for k8s-apps to be running ...
	I1107 23:55:52.555440   38629 system_svc.go:44] waiting for kubelet service to be running ....
	I1107 23:55:52.555494   38629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:55:52.569377   38629 system_svc.go:56] duration metric: took 13.935436ms WaitForService to wait for kubelet.
	I1107 23:55:52.569408   38629 kubeadm.go:581] duration metric: took 11.60455788s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1107 23:55:52.569432   38629 node_conditions.go:102] verifying NodePressure condition ...
	I1107 23:55:52.753136   38629 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1107 23:55:52.753162   38629 node_conditions.go:123] node cpu capacity is 2
	I1107 23:55:52.753170   38629 node_conditions.go:105] duration metric: took 183.733747ms to run NodePressure ...
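
The NodePressure verification reads capacity figures straight off the node object. A sketch that pulls the same two numbers reported above (ephemeral storage 17784752Ki, 2 CPUs), assuming client-go; the package and function name are illustrative:

    package nodeinfo

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // printCapacity prints the figures the node_conditions.go lines report.
    func printCapacity(ctx context.Context, c kubernetes.Interface, name string) error {
    	node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	fmt.Println("ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String()) // 17784752Ki in this run
    	fmt.Println("cpu:", node.Status.Capacity.Cpu().Value())                             // 2 in this run
    	return nil
    }
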
	I1107 23:55:52.753180   38629 start.go:228] waiting for startup goroutines ...
	I1107 23:55:52.753186   38629 start.go:233] waiting for cluster config update ...
	I1107 23:55:52.753195   38629 start.go:242] writing updated cluster config ...
	I1107 23:55:52.753470   38629 ssh_runner.go:195] Run: rm -f paused
	I1107 23:55:52.798624   38629 start.go:600] kubectl: 1.28.3, cluster: 1.24.4 (minor skew: 4)
	I1107 23:55:52.800712   38629 out.go:177] 
	W1107 23:55:52.802240   38629 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.24.4.
	I1107 23:55:52.803514   38629 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I1107 23:55:52.804799   38629 out.go:177] * Done! kubectl is now configured to use "test-preload-197747" cluster and "default" namespace by default
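
The skew warning a few lines up compares minor versions: kubectl 1.28.3 against cluster 1.24.4 gives a minor skew of 28 - 24 = 4, well outside the one-minor window kubectl is supported against. A rough sketch of that arithmetic (hand-rolled parsing for illustration, not minikube's actual implementation):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minor extracts the minor component of a "major.minor.patch" version string.
    func minor(v string) (int, error) {
    	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    	if len(parts) < 2 {
    		return 0, fmt.Errorf("malformed version %q", v)
    	}
    	return strconv.Atoi(parts[1])
    }

    func main() {
    	kubectlMinor, _ := minor("1.28.3") // /usr/local/bin/kubectl in this run
    	clusterMinor, _ := minor("1.24.4") // cluster version in this run
    	skew := kubectlMinor - clusterMinor
    	if skew < 0 {
    		skew = -skew
    	}
    	fmt.Println("minor skew:", skew) // prints: minor skew: 4
    }
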
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-07 23:54:55 UTC, ends at Tue 2023-11-07 23:55:53 UTC. --
	Nov 07 23:55:53 test-preload-197747 crio[715]: time="2023-11-07 23:55:53.743383047Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699401353743371415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=4fa9800f-f3f8-411e-92c0-953a999739e3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 07 23:55:53 test-preload-197747 crio[715]: time="2023-11-07 23:55:53.743946434Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3be19cff-0ff7-43fb-b374-dba4cbc9782c name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:55:53 test-preload-197747 crio[715]: time="2023-11-07 23:55:53.743998128Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3be19cff-0ff7-43fb-b374-dba4cbc9782c name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:55:53 test-preload-197747 crio[715]: time="2023-11-07 23:55:53.744149988Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9fb49fe48cbbeafb21a04b6ca82b4cc7580583e0b572f839198e4dcef0b00fb1,PodSandboxId:ecc096537b692faa311016d42c0f3182233fc2d2acbc435fa2746b40887102ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1699401342913287467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-wcx69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48e939c1-85ba-4d4d-b12d-e285906a9dbf,},Annotations:map[string]string{io.kubernetes.container.hash: 7c276b27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:780e417ae969cd6eec2d541873f8af094abaea1270f0d43dadbd0eace7082fc4,PodSandboxId:f77c1afdb5fa5a88da1fd1538d1dc70b1fd5886bc540876796a8af0ff6d39eb8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699401340054047289,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eff1c4e-08ce-42a5-a739-35670b1bd74d,},Annotations:map[string]string{io.kubernetes.container.hash: 5d06724a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e406b7ce1bc831008515a8296979ee9f5b9f36636368a0876a03b23d8f07850,PodSandboxId:eccd1530bc692e798cc9026e2c6a6fa0793f94009d0dd9106e9e163be6a23abd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1699401339513266615,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4llrf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82563fb2-5baa-47f1-8bde-5e606e315396,},Annotations:map[string]string{io.kubernetes.container.hash: 634b1752,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7af3748cfb1b8ae0d1685a9524645f258749c819d32d2971624206619d6a56,PodSandboxId:9683859f90ca62f7920ac31ff4270190a290996ed290e5970c57f0f0b6fb6685,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1699401331935434024,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-197747,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b7771bdf5e55c827a9aefd2eb6b1c6,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aebd8598b32d4c2c2e464d0a24810f924d5c229715d208eaf3792ca9d41d9e4,PodSandboxId:ad61e703f0ee557ed5bbf0f7bf198f2f3fbe0bb508ba254f3afc9dd0d134d4b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1699401331506974496,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-197747,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6904e78a09866b78c3b1f571065eeab9,},Annotations:map[string]string{io.kubernetes.container.hash: 29fe179,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa0525b8638cd679e8ea1d343306fd62ca14eabb1de5039bdd424465f0bf61d,PodSandboxId:6efc89485ca44c5a791f09aa69d06cb7ea50727bd58a44007f003f8062f6c37a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1699401331373716088,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-197747,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a77fcc0af92f54ae4968e49945b08d0b,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8540ddd9421b097f89c4cb94fa24bc44622ecc9ad480db52ff7e325807eb7665,PodSandboxId:9196aed0055510a6ee6764c2c791fbf5f06bfb27fd64917096c0b33eafecf493,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1699401331160353650,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-197747,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae090c22ab25992f4c2057e94142b7ee,},Annotations:map[string]string{io.kubernetes.container.hash: 5a36d78b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3be19cff-0ff7-43fb-b374-dba4cbc9782c name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:55:53 test-preload-197747 crio[715]: time="2023-11-07 23:55:53.783376252Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4c1db4b9-afed-4b60-aa9b-78e5d2ffb829 name=/runtime.v1.RuntimeService/Version
	Nov 07 23:55:53 test-preload-197747 crio[715]: time="2023-11-07 23:55:53.783437128Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4c1db4b9-afed-4b60-aa9b-78e5d2ffb829 name=/runtime.v1.RuntimeService/Version
	Nov 07 23:55:53 test-preload-197747 crio[715]: time="2023-11-07 23:55:53.786676770Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ec5a6b8f-a8bc-4404-9fb5-e9184dfcd241 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 07 23:55:53 test-preload-197747 crio[715]: time="2023-11-07 23:55:53.787132482Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699401353787117458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=ec5a6b8f-a8bc-4404-9fb5-e9184dfcd241 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 07 23:55:53 test-preload-197747 crio[715]: time="2023-11-07 23:55:53.788861410Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1f150c57-cbaf-4b90-9734-38ab6baed7b0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:55:53 test-preload-197747 crio[715]: time="2023-11-07 23:55:53.788928488Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1f150c57-cbaf-4b90-9734-38ab6baed7b0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:55:53 test-preload-197747 crio[715]: time="2023-11-07 23:55:53.789186623Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9fb49fe48cbbeafb21a04b6ca82b4cc7580583e0b572f839198e4dcef0b00fb1,PodSandboxId:ecc096537b692faa311016d42c0f3182233fc2d2acbc435fa2746b40887102ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1699401342913287467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-wcx69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48e939c1-85ba-4d4d-b12d-e285906a9dbf,},Annotations:map[string]string{io.kubernetes.container.hash: 7c276b27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:780e417ae969cd6eec2d541873f8af094abaea1270f0d43dadbd0eace7082fc4,PodSandboxId:f77c1afdb5fa5a88da1fd1538d1dc70b1fd5886bc540876796a8af0ff6d39eb8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699401340054047289,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eff1c4e-08ce-42a5-a739-35670b1bd74d,},Annotations:map[string]string{io.kubernetes.container.hash: 5d06724a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e406b7ce1bc831008515a8296979ee9f5b9f36636368a0876a03b23d8f07850,PodSandboxId:eccd1530bc692e798cc9026e2c6a6fa0793f94009d0dd9106e9e163be6a23abd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1699401339513266615,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4llrf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82563fb2-5baa-47f1-8bde-5e606e315396,},Annotations:map[string]string{io.kubernetes.container.hash: 634b1752,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7af3748cfb1b8ae0d1685a9524645f258749c819d32d2971624206619d6a56,PodSandboxId:9683859f90ca62f7920ac31ff4270190a290996ed290e5970c57f0f0b6fb6685,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1699401331935434024,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-197747,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b7771bdf5e55c827a9aefd2eb6b1c6,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aebd8598b32d4c2c2e464d0a24810f924d5c229715d208eaf3792ca9d41d9e4,PodSandboxId:ad61e703f0ee557ed5bbf0f7bf198f2f3fbe0bb508ba254f3afc9dd0d134d4b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1699401331506974496,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-197747,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6904e78a09866b78c3b1f571065eeab9,},Annotations:map[string]string{io.kubernetes.container.hash: 29fe179,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa0525b8638cd679e8ea1d343306fd62ca14eabb1de5039bdd424465f0bf61d,PodSandboxId:6efc89485ca44c5a791f09aa69d06cb7ea50727bd58a44007f003f8062f6c37a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1699401331373716088,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-197747,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a77fcc0af92f54ae4968e49945b08d0b,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8540ddd9421b097f89c4cb94fa24bc44622ecc9ad480db52ff7e325807eb7665,PodSandboxId:9196aed0055510a6ee6764c2c791fbf5f06bfb27fd64917096c0b33eafecf493,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1699401331160353650,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-197747,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae090c22ab25992f4c2057e94142b7ee,},Annotations:map[string]string{io.kubernetes.container.hash: 5a36d78b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1f150c57-cbaf-4b90-9734-38ab6baed7b0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:55:53 test-preload-197747 crio[715]: time="2023-11-07 23:55:53.827664028Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8b392fa5-53ce-4e49-b0c5-e0a07f82a638 name=/runtime.v1.RuntimeService/Version
	Nov 07 23:55:53 test-preload-197747 crio[715]: time="2023-11-07 23:55:53.827720744Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8b392fa5-53ce-4e49-b0c5-e0a07f82a638 name=/runtime.v1.RuntimeService/Version
	Nov 07 23:55:53 test-preload-197747 crio[715]: time="2023-11-07 23:55:53.829207672Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5bd2c75f-b435-43e9-b266-711c4677bcaf name=/runtime.v1.ImageService/ImageFsInfo
	Nov 07 23:55:53 test-preload-197747 crio[715]: time="2023-11-07 23:55:53.829705801Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699401353829690033,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=5bd2c75f-b435-43e9-b266-711c4677bcaf name=/runtime.v1.ImageService/ImageFsInfo
	Nov 07 23:55:53 test-preload-197747 crio[715]: time="2023-11-07 23:55:53.830305703Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a221fb44-f654-41d5-90c5-4a782ea2398a name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:55:53 test-preload-197747 crio[715]: time="2023-11-07 23:55:53.830351206Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a221fb44-f654-41d5-90c5-4a782ea2398a name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:55:53 test-preload-197747 crio[715]: time="2023-11-07 23:55:53.830587109Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9fb49fe48cbbeafb21a04b6ca82b4cc7580583e0b572f839198e4dcef0b00fb1,PodSandboxId:ecc096537b692faa311016d42c0f3182233fc2d2acbc435fa2746b40887102ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1699401342913287467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-wcx69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48e939c1-85ba-4d4d-b12d-e285906a9dbf,},Annotations:map[string]string{io.kubernetes.container.hash: 7c276b27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:780e417ae969cd6eec2d541873f8af094abaea1270f0d43dadbd0eace7082fc4,PodSandboxId:f77c1afdb5fa5a88da1fd1538d1dc70b1fd5886bc540876796a8af0ff6d39eb8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699401340054047289,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eff1c4e-08ce-42a5-a739-35670b1bd74d,},Annotations:map[string]string{io.kubernetes.container.hash: 5d06724a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e406b7ce1bc831008515a8296979ee9f5b9f36636368a0876a03b23d8f07850,PodSandboxId:eccd1530bc692e798cc9026e2c6a6fa0793f94009d0dd9106e9e163be6a23abd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1699401339513266615,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4llrf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82563fb2-5baa-47f1-8bde-5e606e315396,},Annotations:map[string]string{io.kubernetes.container.hash: 634b1752,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7af3748cfb1b8ae0d1685a9524645f258749c819d32d2971624206619d6a56,PodSandboxId:9683859f90ca62f7920ac31ff4270190a290996ed290e5970c57f0f0b6fb6685,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1699401331935434024,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-197747,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b7771bdf5e55c827a9aefd2eb6b1c6,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aebd8598b32d4c2c2e464d0a24810f924d5c229715d208eaf3792ca9d41d9e4,PodSandboxId:ad61e703f0ee557ed5bbf0f7bf198f2f3fbe0bb508ba254f3afc9dd0d134d4b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1699401331506974496,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-197747,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6904e78a09866b78c3b1f571065eeab9,},Annotations:map[string]string{io.kubernetes.container.hash: 29fe179,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa0525b8638cd679e8ea1d343306fd62ca14eabb1de5039bdd424465f0bf61d,PodSandboxId:6efc89485ca44c5a791f09aa69d06cb7ea50727bd58a44007f003f8062f6c37a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1699401331373716088,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-197747,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a77fcc0af92f54ae4968e49945b08d0b,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8540ddd9421b097f89c4cb94fa24bc44622ecc9ad480db52ff7e325807eb7665,PodSandboxId:9196aed0055510a6ee6764c2c791fbf5f06bfb27fd64917096c0b33eafecf493,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1699401331160353650,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-197747,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae090c22ab25992f4c2057e94142b7ee,},Annotations:map[string]string{io.kubernetes.container.hash: 5a36d78b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a221fb44-f654-41d5-90c5-4a782ea2398a name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:55:53 test-preload-197747 crio[715]: time="2023-11-07 23:55:53.865061957Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=035f7680-940f-4b18-bd5e-3082e90168df name=/runtime.v1.RuntimeService/Version
	Nov 07 23:55:53 test-preload-197747 crio[715]: time="2023-11-07 23:55:53.865145726Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=035f7680-940f-4b18-bd5e-3082e90168df name=/runtime.v1.RuntimeService/Version
	Nov 07 23:55:53 test-preload-197747 crio[715]: time="2023-11-07 23:55:53.866610029Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=34937d9e-d326-4122-ab0c-7f8f58ebfab8 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 07 23:55:53 test-preload-197747 crio[715]: time="2023-11-07 23:55:53.867012226Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699401353866997245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=34937d9e-d326-4122-ab0c-7f8f58ebfab8 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 07 23:55:53 test-preload-197747 crio[715]: time="2023-11-07 23:55:53.867943491Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f6c88d64-3e2a-4b7c-9b06-360985452d52 name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:55:53 test-preload-197747 crio[715]: time="2023-11-07 23:55:53.867986869Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f6c88d64-3e2a-4b7c-9b06-360985452d52 name=/runtime.v1.RuntimeService/ListContainers
	Nov 07 23:55:53 test-preload-197747 crio[715]: time="2023-11-07 23:55:53.868138429Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9fb49fe48cbbeafb21a04b6ca82b4cc7580583e0b572f839198e4dcef0b00fb1,PodSandboxId:ecc096537b692faa311016d42c0f3182233fc2d2acbc435fa2746b40887102ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1699401342913287467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-wcx69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48e939c1-85ba-4d4d-b12d-e285906a9dbf,},Annotations:map[string]string{io.kubernetes.container.hash: 7c276b27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:780e417ae969cd6eec2d541873f8af094abaea1270f0d43dadbd0eace7082fc4,PodSandboxId:f77c1afdb5fa5a88da1fd1538d1dc70b1fd5886bc540876796a8af0ff6d39eb8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699401340054047289,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eff1c4e-08ce-42a5-a739-35670b1bd74d,},Annotations:map[string]string{io.kubernetes.container.hash: 5d06724a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e406b7ce1bc831008515a8296979ee9f5b9f36636368a0876a03b23d8f07850,PodSandboxId:eccd1530bc692e798cc9026e2c6a6fa0793f94009d0dd9106e9e163be6a23abd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1699401339513266615,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4llrf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82563fb2-5baa-47f1-8bde-5e606e315396,},Annotations:map[string]string{io.kubernetes.container.hash: 634b1752,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7af3748cfb1b8ae0d1685a9524645f258749c819d32d2971624206619d6a56,PodSandboxId:9683859f90ca62f7920ac31ff4270190a290996ed290e5970c57f0f0b6fb6685,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1699401331935434024,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-197747,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b7771bdf5e55c827a9aefd2eb6b1c6,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aebd8598b32d4c2c2e464d0a24810f924d5c229715d208eaf3792ca9d41d9e4,PodSandboxId:ad61e703f0ee557ed5bbf0f7bf198f2f3fbe0bb508ba254f3afc9dd0d134d4b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1699401331506974496,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-197747,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6904e78a09866b78c3b1f571065eeab9,},Annotations:map[string]string{io.kubernetes.container.hash: 29fe179,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa0525b8638cd679e8ea1d343306fd62ca14eabb1de5039bdd424465f0bf61d,PodSandboxId:6efc89485ca44c5a791f09aa69d06cb7ea50727bd58a44007f003f8062f6c37a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1699401331373716088,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-197747,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a77fcc0af92f54ae4968e49945b08d0b,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8540ddd9421b097f89c4cb94fa24bc44622ecc9ad480db52ff7e325807eb7665,PodSandboxId:9196aed0055510a6ee6764c2c791fbf5f06bfb27fd64917096c0b33eafecf493,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1699401331160353650,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-197747,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae090c22ab25992f4c2057e94142b7ee,},Annotations:map[string]string{io.kubernetes.container.hash: 5a36d78b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f6c88d64-3e2a-4b7c-9b06-360985452d52 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9fb49fe48cbbe       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   11 seconds ago      Running             coredns                   1                   ecc096537b692       coredns-6d4b75cb6d-wcx69
	780e417ae969c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Running             storage-provisioner       1                   f77c1afdb5fa5       storage-provisioner
	2e406b7ce1bc8       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   eccd1530bc692       kube-proxy-4llrf
	3a7af3748cfb1       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   22 seconds ago      Running             kube-scheduler            1                   9683859f90ca6       kube-scheduler-test-preload-197747
	7aebd8598b32d       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   22 seconds ago      Running             etcd                      1                   ad61e703f0ee5       etcd-test-preload-197747
	cfa0525b8638c       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   22 seconds ago      Running             kube-controller-manager   1                   6efc89485ca44       kube-controller-manager-test-preload-197747
	8540ddd9421b0       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   22 seconds ago      Running             kube-apiserver            1                   9196aed005551       kube-apiserver-test-preload-197747
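The table above is CRI-level container status. Assuming the crictl CLI shipped in the minikube VM, a comparable listing can be reproduced with:

    minikube ssh -p test-preload-197747 -- sudo crictl ps -a

(-a includes exited containers as well as running ones.)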
	
	* 
	* ==> coredns [9fb49fe48cbbeafb21a04b6ca82b4cc7580583e0b572f839198e4dcef0b00fb1] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:32790 - 6871 "HINFO IN 6585885912035083634.6066055876830893573. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01409914s
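The single HINFO query from 127.0.0.1 with a long random name is CoreDNS's loop-detection probe; the NXDOMAIN answer means no forwarding loop was found, so the restarted instance came up healthy. The same log can be read straight from the runtime (container ID taken from the status table above; crictl generally accepts ID prefixes):

    minikube ssh -p test-preload-197747 -- sudo crictl logs 9fb49fe48cbbe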
	
	* 
	* ==> describe nodes <==
	* Name:               test-preload-197747
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-197747
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e
	                    minikube.k8s.io/name=test-preload-197747
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_07T23_54_02_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 Nov 2023 23:53:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-197747
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 Nov 2023 23:55:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 Nov 2023 23:55:48 +0000   Tue, 07 Nov 2023 23:53:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 Nov 2023 23:55:48 +0000   Tue, 07 Nov 2023 23:53:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 Nov 2023 23:55:48 +0000   Tue, 07 Nov 2023 23:53:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 Nov 2023 23:55:48 +0000   Tue, 07 Nov 2023 23:55:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.173
	  Hostname:    test-preload-197747
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 effa0543149e4604a286c062f11d1c76
	  System UUID:                effa0543-149e-4604-a286-c062f11d1c76
	  Boot ID:                    2a26ca96-0719-4b59-b41b-4ee78aa04abd
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-wcx69                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     99s
	  kube-system                 etcd-test-preload-197747                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         111s
	  kube-system                 kube-apiserver-test-preload-197747             250m (12%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-test-preload-197747    200m (10%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-4llrf                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-scheduler-test-preload-197747             100m (5%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 13s                  kube-proxy       
	  Normal  Starting                 95s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m1s (x5 over 2m1s)  kubelet          Node test-preload-197747 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s (x5 over 2m1s)  kubelet          Node test-preload-197747 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s (x4 over 2m1s)  kubelet          Node test-preload-197747 status is now: NodeHasSufficientPID
	  Normal  Starting                 112s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  112s                 kubelet          Node test-preload-197747 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s                 kubelet          Node test-preload-197747 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s                 kubelet          Node test-preload-197747 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  112s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                101s                 kubelet          Node test-preload-197747 status is now: NodeReady
	  Normal  RegisteredNode           100s                 node-controller  Node test-preload-197747 event: Registered Node test-preload-197747 in Controller
	  Normal  Starting                 24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)    kubelet          Node test-preload-197747 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)    kubelet          Node test-preload-197747 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)    kubelet          Node test-preload-197747 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                   node-controller  Node test-preload-197747 event: Registered Node test-preload-197747 in Controller
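The node report above is ordinary kubectl describe node output, and the Allocated-resources percentages follow from the pod requests against node capacity: 750m CPU requested out of 2 CPUs (2000m) is 37%, and 170Mi out of 2165900Ki memory is roughly 8%. It can be regenerated against this profile with:

    kubectl --context test-preload-197747 describe node test-preload-197747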
	
	* 
	* ==> dmesg <==
	* [Nov 7 23:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.065300] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.354781] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.408061] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.147451] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.512415] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000067] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Nov 7 23:55] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.110943] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.147107] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.109838] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.220124] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[ +24.708053] systemd-fstab-generator[1096]: Ignoring "noauto" for root device
	[ +10.364788] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.043424] kauditd_printk_skb: 13 callbacks suppressed
	
	* 
	* ==> etcd [7aebd8598b32d4c2c2e464d0a24810f924d5c229715d208eaf3792ca9d41d9e4] <==
	* {"level":"info","ts":"2023-11-07T23:55:33.331Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"db356cbc19811e0e","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-11-07T23:55:33.332Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-11-07T23:55:33.333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db356cbc19811e0e switched to configuration voters=(15795650823209426446)"}
	{"level":"info","ts":"2023-11-07T23:55:33.333Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a25ac6d8ed10a2a9","local-member-id":"db356cbc19811e0e","added-peer-id":"db356cbc19811e0e","added-peer-peer-urls":["https://192.168.39.173:2380"]}
	{"level":"info","ts":"2023-11-07T23:55:33.333Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a25ac6d8ed10a2a9","local-member-id":"db356cbc19811e0e","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-07T23:55:33.333Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-07T23:55:33.333Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-07T23:55:33.333Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"db356cbc19811e0e","initial-advertise-peer-urls":["https://192.168.39.173:2380"],"listen-peer-urls":["https://192.168.39.173:2380"],"advertise-client-urls":["https://192.168.39.173:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.173:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-07T23:55:33.333Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-07T23:55:33.333Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.173:2380"}
	{"level":"info","ts":"2023-11-07T23:55:33.333Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.173:2380"}
	{"level":"info","ts":"2023-11-07T23:55:35.217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db356cbc19811e0e is starting a new election at term 2"}
	{"level":"info","ts":"2023-11-07T23:55:35.217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db356cbc19811e0e became pre-candidate at term 2"}
	{"level":"info","ts":"2023-11-07T23:55:35.217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db356cbc19811e0e received MsgPreVoteResp from db356cbc19811e0e at term 2"}
	{"level":"info","ts":"2023-11-07T23:55:35.217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db356cbc19811e0e became candidate at term 3"}
	{"level":"info","ts":"2023-11-07T23:55:35.217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db356cbc19811e0e received MsgVoteResp from db356cbc19811e0e at term 3"}
	{"level":"info","ts":"2023-11-07T23:55:35.217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db356cbc19811e0e became leader at term 3"}
	{"level":"info","ts":"2023-11-07T23:55:35.217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: db356cbc19811e0e elected leader db356cbc19811e0e at term 3"}
	{"level":"info","ts":"2023-11-07T23:55:35.218Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"db356cbc19811e0e","local-member-attributes":"{Name:test-preload-197747 ClientURLs:[https://192.168.39.173:2379]}","request-path":"/0/members/db356cbc19811e0e/attributes","cluster-id":"a25ac6d8ed10a2a9","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-07T23:55:35.218Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-07T23:55:35.220Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.173:2379"}
	{"level":"info","ts":"2023-11-07T23:55:35.220Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-07T23:55:35.220Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-07T23:55:35.220Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-07T23:55:35.221Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  23:55:54 up 1 min,  0 users,  load average: 1.09, 0.34, 0.12
	Linux test-preload-197747 5.10.57 #1 SMP Tue Nov 7 06:51:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [8540ddd9421b097f89c4cb94fa24bc44622ecc9ad480db52ff7e325807eb7665] <==
	* I1107 23:55:37.762181       1 controller.go:85] Starting OpenAPI controller
	I1107 23:55:37.762289       1 controller.go:85] Starting OpenAPI V3 controller
	I1107 23:55:37.762340       1 naming_controller.go:291] Starting NamingConditionController
	I1107 23:55:37.762771       1 establishing_controller.go:76] Starting EstablishingController
	I1107 23:55:37.762825       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1107 23:55:37.762848       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1107 23:55:37.762897       1 crd_finalizer.go:266] Starting CRDFinalizer
	E1107 23:55:37.771258       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1107 23:55:37.783746       1 cache.go:39] Caches are synced for autoregister controller
	I1107 23:55:37.796600       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1107 23:55:37.802657       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1107 23:55:37.837073       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1107 23:55:37.878030       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1107 23:55:37.878628       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1107 23:55:37.879080       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1107 23:55:38.335850       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1107 23:55:38.687233       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1107 23:55:39.526813       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1107 23:55:39.553349       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1107 23:55:39.597391       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1107 23:55:39.625416       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1107 23:55:39.636607       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1107 23:55:40.223714       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1107 23:55:50.255618       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1107 23:55:50.508250       1 controller.go:611] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [cfa0525b8638cd679e8ea1d343306fd62ca14eabb1de5039bdd424465f0bf61d] <==
	* I1107 23:55:50.268049       1 range_allocator.go:173] Starting range CIDR allocator
	I1107 23:55:50.268060       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I1107 23:55:50.268068       1 shared_informer.go:262] Caches are synced for cidrallocator
	I1107 23:55:50.268169       1 shared_informer.go:262] Caches are synced for endpoint
	I1107 23:55:50.269898       1 shared_informer.go:262] Caches are synced for persistent volume
	I1107 23:55:50.272259       1 shared_informer.go:262] Caches are synced for PVC protection
	I1107 23:55:50.280123       1 shared_informer.go:262] Caches are synced for deployment
	I1107 23:55:50.286002       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I1107 23:55:50.299632       1 shared_informer.go:262] Caches are synced for taint
	I1107 23:55:50.299921       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W1107 23:55:50.300491       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-197747. Assuming now as a timestamp.
	I1107 23:55:50.300077       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1107 23:55:50.300145       1 event.go:294] "Event occurred" object="test-preload-197747" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-197747 event: Registered Node test-preload-197747 in Controller"
	I1107 23:55:50.302441       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1107 23:55:50.305312       1 shared_informer.go:262] Caches are synced for ReplicationController
	I1107 23:55:50.312481       1 shared_informer.go:262] Caches are synced for daemon sets
	I1107 23:55:50.322020       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I1107 23:55:50.383969       1 shared_informer.go:262] Caches are synced for disruption
	I1107 23:55:50.383986       1 disruption.go:371] Sending events to api server.
	I1107 23:55:50.414658       1 shared_informer.go:262] Caches are synced for stateful set
	I1107 23:55:50.427173       1 shared_informer.go:262] Caches are synced for resource quota
	I1107 23:55:50.457684       1 shared_informer.go:262] Caches are synced for resource quota
	I1107 23:55:50.878842       1 shared_informer.go:262] Caches are synced for garbage collector
	I1107 23:55:50.878932       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1107 23:55:50.920121       1 shared_informer.go:262] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [2e406b7ce1bc831008515a8296979ee9f5b9f36636368a0876a03b23d8f07850] <==
	* I1107 23:55:40.097773       1 node.go:163] Successfully retrieved node IP: 192.168.39.173
	I1107 23:55:40.108445       1 server_others.go:138] "Detected node IP" address="192.168.39.173"
	I1107 23:55:40.108690       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1107 23:55:40.202715       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1107 23:55:40.202795       1 server_others.go:206] "Using iptables Proxier"
	I1107 23:55:40.202829       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1107 23:55:40.203105       1 server.go:661] "Version info" version="v1.24.4"
	I1107 23:55:40.203232       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 23:55:40.205112       1 config.go:317] "Starting service config controller"
	I1107 23:55:40.205344       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1107 23:55:40.205397       1 config.go:226] "Starting endpoint slice config controller"
	I1107 23:55:40.205403       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1107 23:55:40.210898       1 config.go:444] "Starting node config controller"
	I1107 23:55:40.211148       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1107 23:55:40.305477       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1107 23:55:40.305645       1 shared_informer.go:262] Caches are synced for service config
	I1107 23:55:40.312634       1 shared_informer.go:262] Caches are synced for node config
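With no proxy mode configured, kube-proxy logs the "Unknown proxy mode" line above, falls back to the iptables proxier, and then waits for its service, endpoint-slice, and node config caches to sync. Once the API server is reachable, the same log is also available through the pod (pod name taken from the container-status table):

    kubectl --context test-preload-197747 -n kube-system logs kube-proxy-4llrf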
	
	* 
	* ==> kube-scheduler [3a7af3748cfb1b8ae0d1685a9524645f258749c819d32d2971624206619d6a56] <==
	* I1107 23:55:33.564807       1 serving.go:348] Generated self-signed cert in-memory
	W1107 23:55:37.787088       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1107 23:55:37.790445       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1107 23:55:37.790593       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1107 23:55:37.790603       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1107 23:55:37.818478       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I1107 23:55:37.818572       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 23:55:37.824992       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1107 23:55:37.825293       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1107 23:55:37.825396       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1107 23:55:37.825593       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1107 23:55:37.926126       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
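The three scheduler warnings are startup noise: the API server had not finished syncing when the scheduler tried to read the extension-apiserver-authentication ConfigMap, so it continued without that configuration, and the final "Caches are synced" line shows it recovered. If the forbidden error persisted, the log's own suggested fix could be instantiated for the user named in the error (the rolebinding name here is illustrative, not from this run):

    kubectl --context test-preload-197747 -n kube-system create rolebinding scheduler-authn-reader --role=extension-apiserver-authentication-reader --user=system:kube-scheduler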
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-07 23:54:55 UTC, ends at Tue 2023-11-07 23:55:54 UTC. --
	Nov 07 23:55:38 test-preload-197747 kubelet[1102]: I1107 23:55:38.116393    1102 topology_manager.go:200] "Topology Admit Handler"
	Nov 07 23:55:38 test-preload-197747 kubelet[1102]: I1107 23:55:38.282186    1102 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48e939c1-85ba-4d4d-b12d-e285906a9dbf-config-volume\") pod \"coredns-6d4b75cb6d-wcx69\" (UID: \"48e939c1-85ba-4d4d-b12d-e285906a9dbf\") " pod="kube-system/coredns-6d4b75cb6d-wcx69"
	Nov 07 23:55:38 test-preload-197747 kubelet[1102]: I1107 23:55:38.282256    1102 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82563fb2-5baa-47f1-8bde-5e606e315396-lib-modules\") pod \"kube-proxy-4llrf\" (UID: \"82563fb2-5baa-47f1-8bde-5e606e315396\") " pod="kube-system/kube-proxy-4llrf"
	Nov 07 23:55:38 test-preload-197747 kubelet[1102]: I1107 23:55:38.282284    1102 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rqh8\" (UniqueName: \"kubernetes.io/projected/82563fb2-5baa-47f1-8bde-5e606e315396-kube-api-access-9rqh8\") pod \"kube-proxy-4llrf\" (UID: \"82563fb2-5baa-47f1-8bde-5e606e315396\") " pod="kube-system/kube-proxy-4llrf"
	Nov 07 23:55:38 test-preload-197747 kubelet[1102]: I1107 23:55:38.282307    1102 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/82563fb2-5baa-47f1-8bde-5e606e315396-kube-proxy\") pod \"kube-proxy-4llrf\" (UID: \"82563fb2-5baa-47f1-8bde-5e606e315396\") " pod="kube-system/kube-proxy-4llrf"
	Nov 07 23:55:38 test-preload-197747 kubelet[1102]: I1107 23:55:38.282327    1102 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82563fb2-5baa-47f1-8bde-5e606e315396-xtables-lock\") pod \"kube-proxy-4llrf\" (UID: \"82563fb2-5baa-47f1-8bde-5e606e315396\") " pod="kube-system/kube-proxy-4llrf"
	Nov 07 23:55:38 test-preload-197747 kubelet[1102]: I1107 23:55:38.282345    1102 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxf2h\" (UniqueName: \"kubernetes.io/projected/5eff1c4e-08ce-42a5-a739-35670b1bd74d-kube-api-access-mxf2h\") pod \"storage-provisioner\" (UID: \"5eff1c4e-08ce-42a5-a739-35670b1bd74d\") " pod="kube-system/storage-provisioner"
	Nov 07 23:55:38 test-preload-197747 kubelet[1102]: I1107 23:55:38.282364    1102 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg5m9\" (UniqueName: \"kubernetes.io/projected/48e939c1-85ba-4d4d-b12d-e285906a9dbf-kube-api-access-vg5m9\") pod \"coredns-6d4b75cb6d-wcx69\" (UID: \"48e939c1-85ba-4d4d-b12d-e285906a9dbf\") " pod="kube-system/coredns-6d4b75cb6d-wcx69"
	Nov 07 23:55:38 test-preload-197747 kubelet[1102]: I1107 23:55:38.282381    1102 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5eff1c4e-08ce-42a5-a739-35670b1bd74d-tmp\") pod \"storage-provisioner\" (UID: \"5eff1c4e-08ce-42a5-a739-35670b1bd74d\") " pod="kube-system/storage-provisioner"
	Nov 07 23:55:38 test-preload-197747 kubelet[1102]: I1107 23:55:38.282394    1102 reconciler.go:159] "Reconciler: start to sync state"
	Nov 07 23:55:38 test-preload-197747 kubelet[1102]: I1107 23:55:38.640427    1102 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1983145-4001-42a7-b847-028bae4268c6-config-volume\") pod \"d1983145-4001-42a7-b847-028bae4268c6\" (UID: \"d1983145-4001-42a7-b847-028bae4268c6\") "
	Nov 07 23:55:38 test-preload-197747 kubelet[1102]: I1107 23:55:38.640467    1102 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67fpc\" (UniqueName: \"kubernetes.io/projected/d1983145-4001-42a7-b847-028bae4268c6-kube-api-access-67fpc\") pod \"d1983145-4001-42a7-b847-028bae4268c6\" (UID: \"d1983145-4001-42a7-b847-028bae4268c6\") "
	Nov 07 23:55:38 test-preload-197747 kubelet[1102]: E1107 23:55:38.641621    1102 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 07 23:55:38 test-preload-197747 kubelet[1102]: E1107 23:55:38.641691    1102 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/48e939c1-85ba-4d4d-b12d-e285906a9dbf-config-volume podName:48e939c1-85ba-4d4d-b12d-e285906a9dbf nodeName:}" failed. No retries permitted until 2023-11-07 23:55:39.141665769 +0000 UTC m=+9.190710885 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/48e939c1-85ba-4d4d-b12d-e285906a9dbf-config-volume") pod "coredns-6d4b75cb6d-wcx69" (UID: "48e939c1-85ba-4d4d-b12d-e285906a9dbf") : object "kube-system"/"coredns" not registered
	Nov 07 23:55:38 test-preload-197747 kubelet[1102]: W1107 23:55:38.642368    1102 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/d1983145-4001-42a7-b847-028bae4268c6/volumes/kubernetes.io~projected/kube-api-access-67fpc: clearQuota called, but quotas disabled
	Nov 07 23:55:38 test-preload-197747 kubelet[1102]: W1107 23:55:38.642671    1102 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/d1983145-4001-42a7-b847-028bae4268c6/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Nov 07 23:55:38 test-preload-197747 kubelet[1102]: I1107 23:55:38.642765    1102 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1983145-4001-42a7-b847-028bae4268c6-kube-api-access-67fpc" (OuterVolumeSpecName: "kube-api-access-67fpc") pod "d1983145-4001-42a7-b847-028bae4268c6" (UID: "d1983145-4001-42a7-b847-028bae4268c6"). InnerVolumeSpecName "kube-api-access-67fpc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 07 23:55:38 test-preload-197747 kubelet[1102]: I1107 23:55:38.643297    1102 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1983145-4001-42a7-b847-028bae4268c6-config-volume" (OuterVolumeSpecName: "config-volume") pod "d1983145-4001-42a7-b847-028bae4268c6" (UID: "d1983145-4001-42a7-b847-028bae4268c6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Nov 07 23:55:38 test-preload-197747 kubelet[1102]: I1107 23:55:38.741397    1102 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1983145-4001-42a7-b847-028bae4268c6-config-volume\") on node \"test-preload-197747\" DevicePath \"\""
	Nov 07 23:55:38 test-preload-197747 kubelet[1102]: I1107 23:55:38.741426    1102 reconciler.go:384] "Volume detached for volume \"kube-api-access-67fpc\" (UniqueName: \"kubernetes.io/projected/d1983145-4001-42a7-b847-028bae4268c6-kube-api-access-67fpc\") on node \"test-preload-197747\" DevicePath \"\""
	Nov 07 23:55:39 test-preload-197747 kubelet[1102]: E1107 23:55:39.145246    1102 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 07 23:55:39 test-preload-197747 kubelet[1102]: E1107 23:55:39.145306    1102 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/48e939c1-85ba-4d4d-b12d-e285906a9dbf-config-volume podName:48e939c1-85ba-4d4d-b12d-e285906a9dbf nodeName:}" failed. No retries permitted until 2023-11-07 23:55:40.145292989 +0000 UTC m=+10.194338104 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/48e939c1-85ba-4d4d-b12d-e285906a9dbf-config-volume") pod "coredns-6d4b75cb6d-wcx69" (UID: "48e939c1-85ba-4d4d-b12d-e285906a9dbf") : object "kube-system"/"coredns" not registered
	Nov 07 23:55:40 test-preload-197747 kubelet[1102]: E1107 23:55:40.149475    1102 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 07 23:55:40 test-preload-197747 kubelet[1102]: E1107 23:55:40.149641    1102 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/48e939c1-85ba-4d4d-b12d-e285906a9dbf-config-volume podName:48e939c1-85ba-4d4d-b12d-e285906a9dbf nodeName:}" failed. No retries permitted until 2023-11-07 23:55:42.149624013 +0000 UTC m=+12.198669118 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/48e939c1-85ba-4d4d-b12d-e285906a9dbf-config-volume") pod "coredns-6d4b75cb6d-wcx69" (UID: "48e939c1-85ba-4d4d-b12d-e285906a9dbf") : object "kube-system"/"coredns" not registered
	Nov 07 23:55:42 test-preload-197747 kubelet[1102]: I1107 23:55:42.211203    1102 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=d1983145-4001-42a7-b847-028bae4268c6 path="/var/lib/kubelet/pods/d1983145-4001-42a7-b847-028bae4268c6/volumes"
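The repeated MountVolume.SetUp failures above are the kubelet waiting for the coredns ConfigMap to be re-registered after restart; note durationBeforeRetry doubling from 500ms to 1s to 2s, the operation executor's exponential backoff, until the volume mounts and coredns reaches Running (as the container-status table confirms). The full journal excerpted here can be pulled with:

    minikube ssh -p test-preload-197747 -- sudo journalctl -u kubelet --no-pager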
	
	* 
	* ==> storage-provisioner [780e417ae969cd6eec2d541873f8af094abaea1270f0d43dadbd0eace7082fc4] <==
	* I1107 23:55:40.264401       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-197747 -n test-preload-197747
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-197747 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-197747" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-197747
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-197747: (1.104347846s)
--- FAIL: TestPreload (250.01s)

                                                
                                    
TestRunningBinaryUpgrade (177.15s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.324126302.exe start -p running-upgrade-802871 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E1107 23:58:53.871912   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.324126302.exe start -p running-upgrade-802871 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m19.800253887s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-802871 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-802871 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (32.818945292s)

                                                
                                                
-- stdout --
	* [running-upgrade-802871] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-802871 in cluster running-upgrade-802871
	* Updating the running kvm2 "running-upgrade-802871" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 00:00:18.692690   41717 out.go:296] Setting OutFile to fd 1 ...
	I1108 00:00:18.693000   41717 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:00:18.693011   41717 out.go:309] Setting ErrFile to fd 2...
	I1108 00:00:18.693016   41717 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:00:18.693209   41717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
	I1108 00:00:18.693733   41717 out.go:303] Setting JSON to false
	I1108 00:00:18.694626   41717 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6168,"bootTime":1699395451,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 00:00:18.694683   41717 start.go:138] virtualization: kvm guest
	I1108 00:00:18.696988   41717 out.go:177] * [running-upgrade-802871] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1108 00:00:18.698640   41717 notify.go:220] Checking for updates...
	I1108 00:00:18.698648   41717 out.go:177]   - MINIKUBE_LOCATION=17585
	I1108 00:00:18.700282   41717 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 00:00:18.701973   41717 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:00:18.703405   41717 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	I1108 00:00:18.704857   41717 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 00:00:18.706304   41717 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 00:00:18.708167   41717 config.go:182] Loaded profile config "running-upgrade-802871": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1108 00:00:18.708180   41717 start_flags.go:694] config upgrade: Driver=kvm2
	I1108 00:00:18.708188   41717 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1108 00:00:18.708248   41717 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/running-upgrade-802871/config.json ...
	I1108 00:00:18.708869   41717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:00:18.708929   41717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:00:18.723088   41717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40927
	I1108 00:00:18.723565   41717 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:00:18.724159   41717 main.go:141] libmachine: Using API Version  1
	I1108 00:00:18.724196   41717 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:00:18.724531   41717 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:00:18.724743   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .DriverName
	I1108 00:00:18.726864   41717 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1108 00:00:18.728367   41717 driver.go:378] Setting default libvirt URI to qemu:///system
	I1108 00:00:18.728696   41717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:00:18.728730   41717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:00:18.742900   41717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42525
	I1108 00:00:18.743271   41717 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:00:18.743764   41717 main.go:141] libmachine: Using API Version  1
	I1108 00:00:18.743802   41717 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:00:18.744189   41717 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:00:18.744380   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .DriverName
	I1108 00:00:18.778883   41717 out.go:177] * Using the kvm2 driver based on existing profile
	I1108 00:00:18.780412   41717 start.go:298] selected driver: kvm2
	I1108 00:00:18.780428   41717 start.go:902] validating driver "kvm2" against &{Name:running-upgrade-802871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APISe
rverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.63 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoP
auseInterval:0s GPUs:}
	I1108 00:00:18.780512   41717 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 00:00:18.781222   41717 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:00:18.781298   41717 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17585-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1108 00:00:18.796013   41717 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1108 00:00:18.796395   41717 cni.go:84] Creating CNI manager for ""
	I1108 00:00:18.796416   41717 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1108 00:00:18.796428   41717 start_flags.go:323] config:
	{Name:running-upgrade-802871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.63 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1108 00:00:18.796591   41717 iso.go:125] acquiring lock: {Name:mk02d02b2a7a45dbdd1b46a32fb0724673cb4d8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:00:18.798400   41717 out.go:177] * Starting control plane node running-upgrade-802871 in cluster running-upgrade-802871
	I1108 00:00:18.799735   41717 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1108 00:00:19.001042   41717 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1108 00:00:19.001176   41717 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/running-upgrade-802871/config.json ...
	I1108 00:00:19.001278   41717 cache.go:107] acquiring lock: {Name:mk594dd7549e60e18d8bd4293c5811a96eeb191f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:00:19.001314   41717 cache.go:107] acquiring lock: {Name:mkdbddaea61325c36bdd1deac0add3efa1ed6c67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:00:19.001327   41717 cache.go:107] acquiring lock: {Name:mk5637421e24801b5bf8ad3ca48d00e3f68a1b01 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:00:19.001433   41717 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I1108 00:00:19.001462   41717 cache.go:107] acquiring lock: {Name:mk26d4ab1ebf0433db88303c7e5eca95c08d7379 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:00:19.001473   41717 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1108 00:00:19.001505   41717 start.go:365] acquiring machines lock for running-upgrade-802871: {Name:mkf032f30be570950285b6e092e75fb29cc3d166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1108 00:00:19.001541   41717 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1108 00:00:19.001278   41717 cache.go:107] acquiring lock: {Name:mk8b5344aa14cfab6603c16267abbee9b90b28bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:00:19.001656   41717 cache.go:107] acquiring lock: {Name:mkc8d68bbf6cc058924e812306c43f81f262db6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:00:19.001433   41717 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I1108 00:00:19.001738   41717 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1108 00:00:19.001748   41717 cache.go:107] acquiring lock: {Name:mkf3ec2311550e530c52ff03f93128f88da2978d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:00:19.001870   41717 cache.go:107] acquiring lock: {Name:mk7e8e34bf28915c1bfc670e8841f663a70cb5e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:00:19.001925   41717 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I1108 00:00:19.001963   41717 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I1108 00:00:19.002314   41717 cache.go:115] /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1108 00:00:19.002340   41717 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.07424ms
	I1108 00:00:19.002356   41717 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1108 00:00:19.002860   41717 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1108 00:00:19.003124   41717 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I1108 00:00:19.003153   41717 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1108 00:00:19.003178   41717 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I1108 00:00:19.003208   41717 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I1108 00:00:19.003378   41717 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1108 00:00:19.003980   41717 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I1108 00:00:19.275059   41717 cache.go:162] opening:  /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I1108 00:00:19.283355   41717 cache.go:162] opening:  /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I1108 00:00:19.309209   41717 cache.go:162] opening:  /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1108 00:00:19.313424   41717 cache.go:162] opening:  /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1108 00:00:19.320592   41717 cache.go:162] opening:  /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I1108 00:00:19.328511   41717 cache.go:162] opening:  /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I1108 00:00:19.368842   41717 cache.go:162] opening:  /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I1108 00:00:19.476759   41717 cache.go:157] /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1108 00:00:19.476794   41717 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 475.141488ms
	I1108 00:00:19.476859   41717 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1108 00:00:19.776839   41717 cache.go:157] /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1108 00:00:19.776865   41717 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 775.551529ms
	I1108 00:00:19.776880   41717 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1108 00:00:20.459848   41717 cache.go:157] /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1108 00:00:20.459877   41717 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 1.458415909s
	I1108 00:00:20.459893   41717 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1108 00:00:20.485162   41717 cache.go:157] /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1108 00:00:20.485197   41717 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 1.483472069s
	I1108 00:00:20.485215   41717 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1108 00:00:20.575966   41717 cache.go:157] /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1108 00:00:20.575997   41717 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 1.574131106s
	I1108 00:00:20.576011   41717 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1108 00:00:20.826297   41717 cache.go:157] /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1108 00:00:20.826332   41717 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.825028475s
	I1108 00:00:20.826348   41717 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1108 00:00:21.019599   41717 cache.go:157] /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1108 00:00:21.019630   41717 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 2.018357341s
	I1108 00:00:21.019652   41717 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1108 00:00:21.019671   41717 cache.go:87] Successfully saved all images to host disk.
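The cache.go lines above also show minikube's on-disk naming scheme for cached images: the tag's colon becomes an underscore under cache/images/<arch>/<registry>/. A minimal shell sketch of that mapping, using only the substitution visible in the log (path shortened; none of this is a minikube API):

	# map an image ref to its cache file name, as seen in the cache.go lines above
	img="registry.k8s.io/kube-proxy:v1.17.0"
	cache_root="$HOME/.minikube/cache/images/amd64"
	echo "$cache_root/${img/:/_}"   # -> .../registry.k8s.io/kube-proxy_v1.17.0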
	I1108 00:00:47.386055   41717 start.go:369] acquired machines lock for "running-upgrade-802871" in 28.384502793s
	I1108 00:00:47.386102   41717 start.go:96] Skipping create...Using existing machine configuration
	I1108 00:00:47.386110   41717 fix.go:54] fixHost starting: minikube
	I1108 00:00:47.386507   41717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:00:47.386557   41717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:00:47.405617   41717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45051
	I1108 00:00:47.406041   41717 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:00:47.406509   41717 main.go:141] libmachine: Using API Version  1
	I1108 00:00:47.406533   41717 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:00:47.406917   41717 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:00:47.407122   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .DriverName
	I1108 00:00:47.407266   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetState
	I1108 00:00:47.408883   41717 fix.go:102] recreateIfNeeded on running-upgrade-802871: state=Running err=<nil>
	W1108 00:00:47.408920   41717 fix.go:128] unexpected machine state, will restart: <nil>
	I1108 00:00:47.410833   41717 out.go:177] * Updating the running kvm2 "running-upgrade-802871" VM ...
	I1108 00:00:47.412308   41717 machine.go:88] provisioning docker machine ...
	I1108 00:00:47.412332   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .DriverName
	I1108 00:00:47.412576   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetMachineName
	I1108 00:00:47.412742   41717 buildroot.go:166] provisioning hostname "running-upgrade-802871"
	I1108 00:00:47.412759   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetMachineName
	I1108 00:00:47.412923   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHHostname
	I1108 00:00:47.415933   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | domain running-upgrade-802871 has defined MAC address 52:54:00:29:6b:b8 in network minikube-net
	I1108 00:00:47.416264   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:6b:b8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-08 00:58:34 +0000 UTC Type:0 Mac:52:54:00:29:6b:b8 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:running-upgrade-802871 Clientid:01:52:54:00:29:6b:b8}
	I1108 00:00:47.416309   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | domain running-upgrade-802871 has defined IP address 192.168.50.63 and MAC address 52:54:00:29:6b:b8 in network minikube-net
	I1108 00:00:47.416412   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHPort
	I1108 00:00:47.416586   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHKeyPath
	I1108 00:00:47.416733   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHKeyPath
	I1108 00:00:47.416885   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHUsername
	I1108 00:00:47.417112   41717 main.go:141] libmachine: Using SSH client type: native
	I1108 00:00:47.417594   41717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I1108 00:00:47.417612   41717 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-802871 && echo "running-upgrade-802871" | sudo tee /etc/hostname
	I1108 00:00:47.571660   41717 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-802871
	
	I1108 00:00:47.571702   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHHostname
	I1108 00:00:47.574985   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | domain running-upgrade-802871 has defined MAC address 52:54:00:29:6b:b8 in network minikube-net
	I1108 00:00:47.575333   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:6b:b8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-08 00:58:34 +0000 UTC Type:0 Mac:52:54:00:29:6b:b8 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:running-upgrade-802871 Clientid:01:52:54:00:29:6b:b8}
	I1108 00:00:47.575377   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | domain running-upgrade-802871 has defined IP address 192.168.50.63 and MAC address 52:54:00:29:6b:b8 in network minikube-net
	I1108 00:00:47.575539   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHPort
	I1108 00:00:47.575746   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHKeyPath
	I1108 00:00:47.575952   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHKeyPath
	I1108 00:00:47.576312   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHUsername
	I1108 00:00:47.576502   41717 main.go:141] libmachine: Using SSH client type: native
	I1108 00:00:47.577006   41717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I1108 00:00:47.577035   41717 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-802871' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-802871/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-802871' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 00:00:47.710190   41717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 00:00:47.710220   41717 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1108 00:00:47.710246   41717 buildroot.go:174] setting up certificates
	I1108 00:00:47.710259   41717 provision.go:83] configureAuth start
	I1108 00:00:47.710272   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetMachineName
	I1108 00:00:47.710529   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetIP
	I1108 00:00:47.713738   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | domain running-upgrade-802871 has defined MAC address 52:54:00:29:6b:b8 in network minikube-net
	I1108 00:00:47.714125   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:6b:b8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-08 00:58:34 +0000 UTC Type:0 Mac:52:54:00:29:6b:b8 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:running-upgrade-802871 Clientid:01:52:54:00:29:6b:b8}
	I1108 00:00:47.714167   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | domain running-upgrade-802871 has defined IP address 192.168.50.63 and MAC address 52:54:00:29:6b:b8 in network minikube-net
	I1108 00:00:47.714338   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHHostname
	I1108 00:00:47.717131   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | domain running-upgrade-802871 has defined MAC address 52:54:00:29:6b:b8 in network minikube-net
	I1108 00:00:47.717517   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:6b:b8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-08 00:58:34 +0000 UTC Type:0 Mac:52:54:00:29:6b:b8 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:running-upgrade-802871 Clientid:01:52:54:00:29:6b:b8}
	I1108 00:00:47.717548   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | domain running-upgrade-802871 has defined IP address 192.168.50.63 and MAC address 52:54:00:29:6b:b8 in network minikube-net
	I1108 00:00:47.717801   41717 provision.go:138] copyHostCerts
	I1108 00:00:47.717857   41717 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1108 00:00:47.717870   41717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1108 00:00:47.717937   41717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1108 00:00:47.718067   41717 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1108 00:00:47.718084   41717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1108 00:00:47.718122   41717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1108 00:00:47.718219   41717 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1108 00:00:47.718231   41717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1108 00:00:47.718258   41717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1108 00:00:47.718321   41717 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-802871 san=[192.168.50.63 192.168.50.63 localhost 127.0.0.1 minikube running-upgrade-802871]
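provision.go regenerates the guest's server certificate with the SAN list shown above before copying it into the VM. To confirm which names actually landed in server.pem, a standard openssl inspection works; a minimal sketch, with the host-side path shortened from the log (openssl is not part of the test itself):

	openssl x509 -noout -text \
	  -in ~/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'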
	I1108 00:00:47.857656   41717 provision.go:172] copyRemoteCerts
	I1108 00:00:47.857709   41717 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 00:00:47.857729   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHHostname
	I1108 00:00:47.860665   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | domain running-upgrade-802871 has defined MAC address 52:54:00:29:6b:b8 in network minikube-net
	I1108 00:00:47.861007   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:6b:b8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-08 00:58:34 +0000 UTC Type:0 Mac:52:54:00:29:6b:b8 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:running-upgrade-802871 Clientid:01:52:54:00:29:6b:b8}
	I1108 00:00:47.861031   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | domain running-upgrade-802871 has defined IP address 192.168.50.63 and MAC address 52:54:00:29:6b:b8 in network minikube-net
	I1108 00:00:47.861223   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHPort
	I1108 00:00:47.861438   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHKeyPath
	I1108 00:00:47.861601   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHUsername
	I1108 00:00:47.861766   41717 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/running-upgrade-802871/id_rsa Username:docker}
	I1108 00:00:47.963300   41717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 00:00:47.979009   41717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1108 00:00:47.995224   41717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 00:00:48.010096   41717 provision.go:86] duration metric: configureAuth took 299.823511ms
	I1108 00:00:48.010128   41717 buildroot.go:189] setting minikube options for container-runtime
	I1108 00:00:48.010306   41717 config.go:182] Loaded profile config "running-upgrade-802871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1108 00:00:48.010383   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHHostname
	I1108 00:00:48.013217   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | domain running-upgrade-802871 has defined MAC address 52:54:00:29:6b:b8 in network minikube-net
	I1108 00:00:48.013678   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:6b:b8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-08 00:58:34 +0000 UTC Type:0 Mac:52:54:00:29:6b:b8 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:running-upgrade-802871 Clientid:01:52:54:00:29:6b:b8}
	I1108 00:00:48.013705   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | domain running-upgrade-802871 has defined IP address 192.168.50.63 and MAC address 52:54:00:29:6b:b8 in network minikube-net
	I1108 00:00:48.013867   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHPort
	I1108 00:00:48.014021   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHKeyPath
	I1108 00:00:48.014210   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHKeyPath
	I1108 00:00:48.014415   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHUsername
	I1108 00:00:48.014613   41717 main.go:141] libmachine: Using SSH client type: native
	I1108 00:00:48.015148   41717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I1108 00:00:48.015179   41717 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 00:00:48.842171   41717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 00:00:48.842192   41717 machine.go:91] provisioned docker machine in 1.429869021s
	I1108 00:00:48.842201   41717 start.go:300] post-start starting for "running-upgrade-802871" (driver="kvm2")
	I1108 00:00:48.842210   41717 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 00:00:48.842234   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .DriverName
	I1108 00:00:48.842522   41717 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 00:00:48.842560   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHHostname
	I1108 00:00:48.845553   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | domain running-upgrade-802871 has defined MAC address 52:54:00:29:6b:b8 in network minikube-net
	I1108 00:00:48.845909   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:6b:b8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-08 00:58:34 +0000 UTC Type:0 Mac:52:54:00:29:6b:b8 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:running-upgrade-802871 Clientid:01:52:54:00:29:6b:b8}
	I1108 00:00:48.845960   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | domain running-upgrade-802871 has defined IP address 192.168.50.63 and MAC address 52:54:00:29:6b:b8 in network minikube-net
	I1108 00:00:48.846099   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHPort
	I1108 00:00:48.846289   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHKeyPath
	I1108 00:00:48.846405   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHUsername
	I1108 00:00:48.846579   41717 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/running-upgrade-802871/id_rsa Username:docker}
	I1108 00:00:48.948522   41717 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 00:00:48.954188   41717 info.go:137] Remote host: Buildroot 2019.02.7
	I1108 00:00:48.954214   41717 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1108 00:00:48.954293   41717 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1108 00:00:48.954384   41717 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1108 00:00:48.954490   41717 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 00:00:48.962145   41717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:00:48.982956   41717 start.go:303] post-start completed in 140.738263ms
	I1108 00:00:48.982977   41717 fix.go:56] fixHost completed within 1.596867592s
	I1108 00:00:48.983000   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHHostname
	I1108 00:00:48.985852   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | domain running-upgrade-802871 has defined MAC address 52:54:00:29:6b:b8 in network minikube-net
	I1108 00:00:48.986247   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:6b:b8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-08 00:58:34 +0000 UTC Type:0 Mac:52:54:00:29:6b:b8 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:running-upgrade-802871 Clientid:01:52:54:00:29:6b:b8}
	I1108 00:00:48.986284   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | domain running-upgrade-802871 has defined IP address 192.168.50.63 and MAC address 52:54:00:29:6b:b8 in network minikube-net
	I1108 00:00:48.986422   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHPort
	I1108 00:00:48.986622   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHKeyPath
	I1108 00:00:48.986796   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHKeyPath
	I1108 00:00:48.986939   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHUsername
	I1108 00:00:48.987121   41717 main.go:141] libmachine: Using SSH client type: native
	I1108 00:00:48.987456   41717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I1108 00:00:48.987472   41717 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1108 00:00:49.134304   41717 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699401649.131628678
	
	I1108 00:00:49.134330   41717 fix.go:206] guest clock: 1699401649.131628678
	I1108 00:00:49.134340   41717 fix.go:219] Guest: 2023-11-08 00:00:49.131628678 +0000 UTC Remote: 2023-11-08 00:00:48.982981346 +0000 UTC m=+30.342993555 (delta=148.647332ms)
	I1108 00:00:49.134386   41717 fix.go:190] guest clock delta is within tolerance: 148.647332ms
	I1108 00:00:49.134394   41717 start.go:83] releasing machines lock for "running-upgrade-802871", held for 1.748308481s
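The fix.go clock check above reads the guest clock over SSH with `date +%s.%N`, diffs it against the host clock, and accepts the ~149ms delta as within tolerance. A minimal sketch of the same comparison, assuming SSH access as the `docker` user at the IP from the DHCP lease above (the tolerance threshold is minikube-internal and not shown in the log):

	guest=$(ssh docker@192.168.50.63 'date +%s.%N')
	host=$(date +%s.%N)
	awk -v g="$guest" -v h="$host" 'BEGIN { printf "guest clock delta: %.3fs\n", h - g }'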
	I1108 00:00:49.134428   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .DriverName
	I1108 00:00:49.134692   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetIP
	I1108 00:00:49.137893   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | domain running-upgrade-802871 has defined MAC address 52:54:00:29:6b:b8 in network minikube-net
	I1108 00:00:49.138351   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:6b:b8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-08 00:58:34 +0000 UTC Type:0 Mac:52:54:00:29:6b:b8 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:running-upgrade-802871 Clientid:01:52:54:00:29:6b:b8}
	I1108 00:00:49.138385   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | domain running-upgrade-802871 has defined IP address 192.168.50.63 and MAC address 52:54:00:29:6b:b8 in network minikube-net
	I1108 00:00:49.138851   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .DriverName
	I1108 00:00:49.139376   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .DriverName
	I1108 00:00:49.139535   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .DriverName
	I1108 00:00:49.139601   41717 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 00:00:49.139642   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHHostname
	I1108 00:00:49.139978   41717 ssh_runner.go:195] Run: cat /version.json
	I1108 00:00:49.140000   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHHostname
	I1108 00:00:49.142744   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | domain running-upgrade-802871 has defined MAC address 52:54:00:29:6b:b8 in network minikube-net
	I1108 00:00:49.143055   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | domain running-upgrade-802871 has defined MAC address 52:54:00:29:6b:b8 in network minikube-net
	I1108 00:00:49.143091   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:6b:b8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-08 00:58:34 +0000 UTC Type:0 Mac:52:54:00:29:6b:b8 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:running-upgrade-802871 Clientid:01:52:54:00:29:6b:b8}
	I1108 00:00:49.143118   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | domain running-upgrade-802871 has defined IP address 192.168.50.63 and MAC address 52:54:00:29:6b:b8 in network minikube-net
	I1108 00:00:49.143276   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHPort
	I1108 00:00:49.143368   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:6b:b8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-08 00:58:34 +0000 UTC Type:0 Mac:52:54:00:29:6b:b8 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:running-upgrade-802871 Clientid:01:52:54:00:29:6b:b8}
	I1108 00:00:49.143406   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHKeyPath
	I1108 00:00:49.143418   41717 main.go:141] libmachine: (running-upgrade-802871) DBG | domain running-upgrade-802871 has defined IP address 192.168.50.63 and MAC address 52:54:00:29:6b:b8 in network minikube-net
	I1108 00:00:49.143519   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHUsername
	I1108 00:00:49.143614   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHPort
	I1108 00:00:49.143621   41717 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/running-upgrade-802871/id_rsa Username:docker}
	I1108 00:00:49.143771   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHKeyPath
	I1108 00:00:49.143920   41717 main.go:141] libmachine: (running-upgrade-802871) Calling .GetSSHUsername
	I1108 00:00:49.144016   41717 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/running-upgrade-802871/id_rsa Username:docker}
	W1108 00:00:49.273590   41717 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1108 00:00:49.273671   41717 ssh_runner.go:195] Run: systemctl --version
	I1108 00:00:49.280179   41717 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 00:00:49.483583   41717 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 00:00:49.493378   41717 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 00:00:49.493473   41717 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:00:49.508882   41717 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 00:00:49.508911   41717 start.go:472] detecting cgroup driver to use...
	I1108 00:00:49.508985   41717 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 00:00:49.524204   41717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 00:00:49.542565   41717 docker.go:203] disabling cri-docker service (if available) ...
	I1108 00:00:49.542626   41717 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 00:00:49.556378   41717 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 00:00:49.571483   41717 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1108 00:00:49.587394   41717 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1108 00:00:49.587454   41717 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 00:00:49.760418   41717 docker.go:219] disabling docker service ...
	I1108 00:00:49.760510   41717 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 00:00:50.797066   41717 ssh_runner.go:235] Completed: sudo systemctl stop -f docker.socket: (1.03652367s)
	I1108 00:00:50.797147   41717 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 00:00:50.816028   41717 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 00:00:50.979504   41717 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 00:00:51.160929   41717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 00:00:51.170862   41717 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 00:00:51.183098   41717 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1108 00:00:51.183177   41717 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:00:51.225415   41717 out.go:177] 
	W1108 00:00:51.266309   41717 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1108 00:00:51.266336   41717 out.go:239] * 
	W1108 00:00:51.267293   41717 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 00:00:51.372170   41717 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-802871 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
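The stderr above pins down the root cause: the HEAD binary rewrites pause_image with sed against /etc/crio/crio.conf.d/02-crio.conf, but the guest provisioned by v1.6.2 (Buildroot 2019.02.7) predates that drop-in directory, so sed exits 1 and start aborts with RUNTIME_ENABLE. A defensive variant would probe for the drop-in first; a minimal sketch, assuming the legacy image keeps a monolithic config at /etc/crio/crio.conf (an assumption, not verified against the v1.6.2 ISO):

	conf=/etc/crio/crio.conf.d/02-crio.conf
	# fall back to the monolithic config on guests without the drop-in directory (assumed path)
	[ -f "$conf" ] || conf=/etc/crio/crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' "$conf"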
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-11-08 00:00:51.48497439 +0000 UTC m=+3585.648283284
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-802871 -n running-upgrade-802871
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-802871 -n running-upgrade-802871: exit status 4 (339.897725ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1108 00:00:51.763920   44921 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-802871" does not appear in /home/jenkins/minikube-integration/17585-9647/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-802871" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-802871" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-802871
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-802871: (1.796124993s)
--- FAIL: TestRunningBinaryUpgrade (177.15s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (58.37s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-036330 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-036330 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (54.255203533s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-036330] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node pause-036330 in cluster pause-036330
	* Updating the running kvm2 "pause-036330" VM ...
	* Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-036330" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
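None of the lines in the stdout above contain the string the assertion greps for. The check is straightforward to reproduce by hand with the exact command and expected text from this test; a minimal sketch (profile name and flags copied from the pause_test.go invocation above):

	out/minikube-linux-amd64 start -p pause-036330 --alsologtostderr -v=1 \
	  --driver=kvm2 --container-runtime=crio 2>&1 \
	  | grep -F 'The running cluster does not require reconfiguration' \
	  || echo 'second start reconfigured the cluster'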
** stderr ** 
	I1107 23:59:41.749946   41128 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:59:41.750256   41128 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:59:41.750267   41128 out.go:309] Setting ErrFile to fd 2...
	I1107 23:59:41.750271   41128 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:59:41.750483   41128 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
	I1107 23:59:41.751043   41128 out.go:303] Setting JSON to false
	I1107 23:59:41.752036   41128 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6131,"bootTime":1699395451,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 23:59:41.752099   41128 start.go:138] virtualization: kvm guest
	I1107 23:59:41.754370   41128 out.go:177] * [pause-036330] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 23:59:41.756159   41128 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 23:59:41.756184   41128 notify.go:220] Checking for updates...
	I1107 23:59:41.757614   41128 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:59:41.758986   41128 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1107 23:59:41.760304   41128 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	I1107 23:59:41.761551   41128 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 23:59:41.762783   41128 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 23:59:41.764492   41128 config.go:182] Loaded profile config "pause-036330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:59:41.765064   41128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:59:41.765113   41128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:59:41.781471   41128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42117
	I1107 23:59:41.782057   41128 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:59:41.782803   41128 main.go:141] libmachine: Using API Version  1
	I1107 23:59:41.782822   41128 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:59:41.784001   41128 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:59:41.784194   41128 main.go:141] libmachine: (pause-036330) Calling .DriverName
	I1107 23:59:41.784477   41128 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:59:41.784766   41128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:59:41.784791   41128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:59:41.801982   41128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40443
	I1107 23:59:41.802454   41128 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:59:41.802876   41128 main.go:141] libmachine: Using API Version  1
	I1107 23:59:41.802898   41128 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:59:41.803450   41128 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:59:41.803620   41128 main.go:141] libmachine: (pause-036330) Calling .DriverName
	I1107 23:59:41.845817   41128 out.go:177] * Using the kvm2 driver based on existing profile
	I1107 23:59:41.847422   41128 start.go:298] selected driver: kvm2
	I1107 23:59:41.847449   41128 start.go:902] validating driver "kvm2" against &{Name:pause-036330 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:pause-036330 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:59:41.847613   41128 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 23:59:41.848071   41128 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:59:41.848172   41128 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17585-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1107 23:59:41.864908   41128 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1107 23:59:41.865824   41128 cni.go:84] Creating CNI manager for ""
	I1107 23:59:41.865845   41128 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1107 23:59:41.865861   41128 start_flags.go:323] config:
	{Name:pause-036330 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:pause-036330 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:59:41.866118   41128 iso.go:125] acquiring lock: {Name:mk02d02b2a7a45dbdd1b46a32fb0724673cb4d8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:59:41.868079   41128 out.go:177] * Starting control plane node pause-036330 in cluster pause-036330
	I1107 23:59:41.869488   41128 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:59:41.869534   41128 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1107 23:59:41.869546   41128 cache.go:56] Caching tarball of preloaded images
	I1107 23:59:41.869643   41128 preload.go:174] Found /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1107 23:59:41.869658   41128 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
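The preload check above is purely file-based: if a version-matched tarball already sits in the cache, the download is skipped. The same precondition can be verified by hand (path from the preload.go line above, shortened to the default MINIKUBE_HOME):

	ls -lh ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4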
	I1107 23:59:41.869807   41128 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/pause-036330/config.json ...
	I1107 23:59:41.870054   41128 start.go:365] acquiring machines lock for pause-036330: {Name:mkf032f30be570950285b6e092e75fb29cc3d166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1107 23:59:41.870114   41128 start.go:369] acquired machines lock for "pause-036330" in 34.535µs
	I1107 23:59:41.870145   41128 start.go:96] Skipping create...Using existing machine configuration
	I1107 23:59:41.870151   41128 fix.go:54] fixHost starting: 
	I1107 23:59:41.870464   41128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:59:41.870507   41128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:59:41.885555   41128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33383
	I1107 23:59:41.885959   41128 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:59:41.886404   41128 main.go:141] libmachine: Using API Version  1
	I1107 23:59:41.886423   41128 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:59:41.886768   41128 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:59:41.886945   41128 main.go:141] libmachine: (pause-036330) Calling .DriverName
	I1107 23:59:41.887092   41128 main.go:141] libmachine: (pause-036330) Calling .GetState
	I1107 23:59:41.888800   41128 fix.go:102] recreateIfNeeded on pause-036330: state=Running err=<nil>
	W1107 23:59:41.888837   41128 fix.go:128] unexpected machine state, will restart: <nil>
	I1107 23:59:41.890471   41128 out.go:177] * Updating the running kvm2 "pause-036330" VM ...
	I1107 23:59:41.891913   41128 machine.go:88] provisioning docker machine ...
	I1107 23:59:41.891934   41128 main.go:141] libmachine: (pause-036330) Calling .DriverName
	I1107 23:59:41.892148   41128 main.go:141] libmachine: (pause-036330) Calling .GetMachineName
	I1107 23:59:41.892291   41128 buildroot.go:166] provisioning hostname "pause-036330"
	I1107 23:59:41.892310   41128 main.go:141] libmachine: (pause-036330) Calling .GetMachineName
	I1107 23:59:41.892432   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHHostname
	I1107 23:59:41.894643   41128 main.go:141] libmachine: (pause-036330) DBG | domain pause-036330 has defined MAC address 52:54:00:8b:02:ef in network mk-pause-036330
	I1107 23:59:41.895052   41128 main.go:141] libmachine: (pause-036330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:02:ef", ip: ""} in network mk-pause-036330: {Iface:virbr1 ExpiryTime:2023-11-08 00:58:13 +0000 UTC Type:0 Mac:52:54:00:8b:02:ef Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-036330 Clientid:01:52:54:00:8b:02:ef}
	I1107 23:59:41.895078   41128 main.go:141] libmachine: (pause-036330) DBG | domain pause-036330 has defined IP address 192.168.39.61 and MAC address 52:54:00:8b:02:ef in network mk-pause-036330
	I1107 23:59:41.895203   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHPort
	I1107 23:59:41.895372   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHKeyPath
	I1107 23:59:41.895529   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHKeyPath
	I1107 23:59:41.895677   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHUsername
	I1107 23:59:41.895831   41128 main.go:141] libmachine: Using SSH client type: native
	I1107 23:59:41.896359   41128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I1107 23:59:41.896379   41128 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-036330 && echo "pause-036330" | sudo tee /etc/hostname
	I1107 23:59:42.047525   41128 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-036330
	
	I1107 23:59:42.047556   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHHostname
	I1107 23:59:42.050798   41128 main.go:141] libmachine: (pause-036330) DBG | domain pause-036330 has defined MAC address 52:54:00:8b:02:ef in network mk-pause-036330
	I1107 23:59:42.051308   41128 main.go:141] libmachine: (pause-036330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:02:ef", ip: ""} in network mk-pause-036330: {Iface:virbr1 ExpiryTime:2023-11-08 00:58:13 +0000 UTC Type:0 Mac:52:54:00:8b:02:ef Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-036330 Clientid:01:52:54:00:8b:02:ef}
	I1107 23:59:42.051334   41128 main.go:141] libmachine: (pause-036330) DBG | domain pause-036330 has defined IP address 192.168.39.61 and MAC address 52:54:00:8b:02:ef in network mk-pause-036330
	I1107 23:59:42.051509   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHPort
	I1107 23:59:42.051721   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHKeyPath
	I1107 23:59:42.051856   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHKeyPath
	I1107 23:59:42.052015   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHUsername
	I1107 23:59:42.052179   41128 main.go:141] libmachine: Using SSH client type: native
	I1107 23:59:42.052642   41128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I1107 23:59:42.052666   41128 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-036330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-036330/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-036330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 23:59:42.182906   41128 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1107 23:59:42.182941   41128 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1107 23:59:42.182977   41128 buildroot.go:174] setting up certificates
	I1107 23:59:42.182986   41128 provision.go:83] configureAuth start
	I1107 23:59:42.182997   41128 main.go:141] libmachine: (pause-036330) Calling .GetMachineName
	I1107 23:59:42.183282   41128 main.go:141] libmachine: (pause-036330) Calling .GetIP
	I1107 23:59:42.186282   41128 main.go:141] libmachine: (pause-036330) DBG | domain pause-036330 has defined MAC address 52:54:00:8b:02:ef in network mk-pause-036330
	I1107 23:59:42.186683   41128 main.go:141] libmachine: (pause-036330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:02:ef", ip: ""} in network mk-pause-036330: {Iface:virbr1 ExpiryTime:2023-11-08 00:58:13 +0000 UTC Type:0 Mac:52:54:00:8b:02:ef Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-036330 Clientid:01:52:54:00:8b:02:ef}
	I1107 23:59:42.186720   41128 main.go:141] libmachine: (pause-036330) DBG | domain pause-036330 has defined IP address 192.168.39.61 and MAC address 52:54:00:8b:02:ef in network mk-pause-036330
	I1107 23:59:42.187000   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHHostname
	I1107 23:59:42.190017   41128 main.go:141] libmachine: (pause-036330) DBG | domain pause-036330 has defined MAC address 52:54:00:8b:02:ef in network mk-pause-036330
	I1107 23:59:42.190460   41128 main.go:141] libmachine: (pause-036330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:02:ef", ip: ""} in network mk-pause-036330: {Iface:virbr1 ExpiryTime:2023-11-08 00:58:13 +0000 UTC Type:0 Mac:52:54:00:8b:02:ef Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-036330 Clientid:01:52:54:00:8b:02:ef}
	I1107 23:59:42.190511   41128 main.go:141] libmachine: (pause-036330) DBG | domain pause-036330 has defined IP address 192.168.39.61 and MAC address 52:54:00:8b:02:ef in network mk-pause-036330
	I1107 23:59:42.190655   41128 provision.go:138] copyHostCerts
	I1107 23:59:42.190706   41128 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1107 23:59:42.190729   41128 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1107 23:59:42.190788   41128 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1107 23:59:42.190990   41128 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1107 23:59:42.191001   41128 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1107 23:59:42.191034   41128 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1107 23:59:42.191147   41128 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1107 23:59:42.191158   41128 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1107 23:59:42.191185   41128 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1107 23:59:42.191271   41128 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.pause-036330 san=[192.168.39.61 192.168.39.61 localhost 127.0.0.1 minikube pause-036330]
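The provision step above issues a server certificate signed by the shared minikube CA, with a SAN list covering the VM IP, loopback, and the machine names. A minimal Go sketch of SAN-bearing issuance against an existing CA (package and function names are illustrative, not minikube's actual provision code):

	// Package provision sketches server-cert issuance with the SANs seen above.
	package provision

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// issueServerCert signs a fresh 2048-bit key with the given CA and returns
	// the DER-encoded certificate plus its private key.
	func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.pause-036330"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // lifetime is an assumption
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the san=[...] list in the log line above.
			IPAddresses: []net.IP{net.ParseIP("192.168.39.61"), net.ParseIP("127.0.0.1")},
			DNSNames:    []string{"localhost", "minikube", "pause-036330"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		return der, key, err
	}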
	I1107 23:59:42.353020   41128 provision.go:172] copyRemoteCerts
	I1107 23:59:42.353072   41128 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 23:59:42.353093   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHHostname
	I1107 23:59:42.355766   41128 main.go:141] libmachine: (pause-036330) DBG | domain pause-036330 has defined MAC address 52:54:00:8b:02:ef in network mk-pause-036330
	I1107 23:59:42.356135   41128 main.go:141] libmachine: (pause-036330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:02:ef", ip: ""} in network mk-pause-036330: {Iface:virbr1 ExpiryTime:2023-11-08 00:58:13 +0000 UTC Type:0 Mac:52:54:00:8b:02:ef Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-036330 Clientid:01:52:54:00:8b:02:ef}
	I1107 23:59:42.356156   41128 main.go:141] libmachine: (pause-036330) DBG | domain pause-036330 has defined IP address 192.168.39.61 and MAC address 52:54:00:8b:02:ef in network mk-pause-036330
	I1107 23:59:42.356355   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHPort
	I1107 23:59:42.356527   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHKeyPath
	I1107 23:59:42.356692   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHUsername
	I1107 23:59:42.356842   41128 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/pause-036330/id_rsa Username:docker}
	I1107 23:59:42.451790   41128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1107 23:59:42.477147   41128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1107 23:59:42.501393   41128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1107 23:59:42.545666   41128 provision.go:86] duration metric: configureAuth took 362.669809ms
	I1107 23:59:42.545690   41128 buildroot.go:189] setting minikube options for container-runtime
	I1107 23:59:42.545929   41128 config.go:182] Loaded profile config "pause-036330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:59:42.546014   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHHostname
	I1107 23:59:42.548517   41128 main.go:141] libmachine: (pause-036330) DBG | domain pause-036330 has defined MAC address 52:54:00:8b:02:ef in network mk-pause-036330
	I1107 23:59:42.548844   41128 main.go:141] libmachine: (pause-036330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:02:ef", ip: ""} in network mk-pause-036330: {Iface:virbr1 ExpiryTime:2023-11-08 00:58:13 +0000 UTC Type:0 Mac:52:54:00:8b:02:ef Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-036330 Clientid:01:52:54:00:8b:02:ef}
	I1107 23:59:42.548874   41128 main.go:141] libmachine: (pause-036330) DBG | domain pause-036330 has defined IP address 192.168.39.61 and MAC address 52:54:00:8b:02:ef in network mk-pause-036330
	I1107 23:59:42.549043   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHPort
	I1107 23:59:42.549241   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHKeyPath
	I1107 23:59:42.549401   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHKeyPath
	I1107 23:59:42.549575   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHUsername
	I1107 23:59:42.549734   41128 main.go:141] libmachine: Using SSH client type: native
	I1107 23:59:42.550061   41128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I1107 23:59:42.550084   41128 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1107 23:59:48.185018   41128 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1107 23:59:48.185043   41128 machine.go:91] provisioned docker machine in 6.293113379s
	I1107 23:59:48.185056   41128 start.go:300] post-start starting for "pause-036330" (driver="kvm2")
	I1107 23:59:48.185068   41128 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 23:59:48.185089   41128 main.go:141] libmachine: (pause-036330) Calling .DriverName
	I1107 23:59:48.185402   41128 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 23:59:48.185434   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHHostname
	I1107 23:59:48.188672   41128 main.go:141] libmachine: (pause-036330) DBG | domain pause-036330 has defined MAC address 52:54:00:8b:02:ef in network mk-pause-036330
	I1107 23:59:48.189049   41128 main.go:141] libmachine: (pause-036330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:02:ef", ip: ""} in network mk-pause-036330: {Iface:virbr1 ExpiryTime:2023-11-08 00:58:13 +0000 UTC Type:0 Mac:52:54:00:8b:02:ef Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-036330 Clientid:01:52:54:00:8b:02:ef}
	I1107 23:59:48.189073   41128 main.go:141] libmachine: (pause-036330) DBG | domain pause-036330 has defined IP address 192.168.39.61 and MAC address 52:54:00:8b:02:ef in network mk-pause-036330
	I1107 23:59:48.189300   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHPort
	I1107 23:59:48.189493   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHKeyPath
	I1107 23:59:48.189645   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHUsername
	I1107 23:59:48.189796   41128 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/pause-036330/id_rsa Username:docker}
	I1107 23:59:48.294203   41128 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 23:59:48.299691   41128 info.go:137] Remote host: Buildroot 2021.02.12
	I1107 23:59:48.299714   41128 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1107 23:59:48.299784   41128 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1107 23:59:48.299888   41128 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1107 23:59:48.299999   41128 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 23:59:48.310424   41128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
	I1107 23:59:48.334668   41128 start.go:303] post-start completed in 149.597171ms
	I1107 23:59:48.334698   41128 fix.go:56] fixHost completed within 6.464546313s
	I1107 23:59:48.334724   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHHostname
	I1107 23:59:48.337678   41128 main.go:141] libmachine: (pause-036330) DBG | domain pause-036330 has defined MAC address 52:54:00:8b:02:ef in network mk-pause-036330
	I1107 23:59:48.338073   41128 main.go:141] libmachine: (pause-036330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:02:ef", ip: ""} in network mk-pause-036330: {Iface:virbr1 ExpiryTime:2023-11-08 00:58:13 +0000 UTC Type:0 Mac:52:54:00:8b:02:ef Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-036330 Clientid:01:52:54:00:8b:02:ef}
	I1107 23:59:48.338107   41128 main.go:141] libmachine: (pause-036330) DBG | domain pause-036330 has defined IP address 192.168.39.61 and MAC address 52:54:00:8b:02:ef in network mk-pause-036330
	I1107 23:59:48.338265   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHPort
	I1107 23:59:48.338485   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHKeyPath
	I1107 23:59:48.338663   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHKeyPath
	I1107 23:59:48.338816   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHUsername
	I1107 23:59:48.338973   41128 main.go:141] libmachine: Using SSH client type: native
	I1107 23:59:48.339307   41128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I1107 23:59:48.339324   41128 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1107 23:59:48.469400   41128 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699401588.465222032
	
	I1107 23:59:48.469427   41128 fix.go:206] guest clock: 1699401588.465222032
	I1107 23:59:48.469434   41128 fix.go:219] Guest: 2023-11-07 23:59:48.465222032 +0000 UTC Remote: 2023-11-07 23:59:48.334702492 +0000 UTC m=+6.642751046 (delta=130.51954ms)
	I1107 23:59:48.469484   41128 fix.go:190] guest clock delta is within tolerance: 130.51954ms
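The fix.go lines above read the guest's `date +%s.%N`, compare it with the host clock, and accept the machine when the drift is small. A compact Go sketch of that check (the one-second tolerance is an assumption for illustration):

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	// clockDeltaOK parses `date +%s.%N` output and reports whether the guest
	// clock is within tolerance of the host time.
	func clockDeltaOK(guestOut string, host time.Time, tolerance time.Duration) (time.Duration, bool, error) {
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			return 0, false, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := guest.Sub(host)
		return delta, math.Abs(float64(delta)) <= float64(tolerance), nil
	}

	func main() {
		// Values taken from the log lines above: the delta comes out near 130ms.
		d, ok, _ := clockDeltaOK("1699401588.465222032", time.Unix(1699401588, 334702492), time.Second)
		fmt.Printf("delta=%v within tolerance: %v\n", d, ok)
	}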
	I1107 23:59:48.469493   41128 start.go:83] releasing machines lock for "pause-036330", held for 6.599366441s
	I1107 23:59:48.469526   41128 main.go:141] libmachine: (pause-036330) Calling .DriverName
	I1107 23:59:48.469777   41128 main.go:141] libmachine: (pause-036330) Calling .GetIP
	I1107 23:59:48.472565   41128 main.go:141] libmachine: (pause-036330) DBG | domain pause-036330 has defined MAC address 52:54:00:8b:02:ef in network mk-pause-036330
	I1107 23:59:48.472982   41128 main.go:141] libmachine: (pause-036330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:02:ef", ip: ""} in network mk-pause-036330: {Iface:virbr1 ExpiryTime:2023-11-08 00:58:13 +0000 UTC Type:0 Mac:52:54:00:8b:02:ef Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-036330 Clientid:01:52:54:00:8b:02:ef}
	I1107 23:59:48.473015   41128 main.go:141] libmachine: (pause-036330) DBG | domain pause-036330 has defined IP address 192.168.39.61 and MAC address 52:54:00:8b:02:ef in network mk-pause-036330
	I1107 23:59:48.473157   41128 main.go:141] libmachine: (pause-036330) Calling .DriverName
	I1107 23:59:48.473629   41128 main.go:141] libmachine: (pause-036330) Calling .DriverName
	I1107 23:59:48.473765   41128 main.go:141] libmachine: (pause-036330) Calling .DriverName
	I1107 23:59:48.473833   41128 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 23:59:48.473875   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHHostname
	I1107 23:59:48.473995   41128 ssh_runner.go:195] Run: cat /version.json
	I1107 23:59:48.474023   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHHostname
	I1107 23:59:48.477232   41128 main.go:141] libmachine: (pause-036330) DBG | domain pause-036330 has defined MAC address 52:54:00:8b:02:ef in network mk-pause-036330
	I1107 23:59:48.477412   41128 main.go:141] libmachine: (pause-036330) DBG | domain pause-036330 has defined MAC address 52:54:00:8b:02:ef in network mk-pause-036330
	I1107 23:59:48.477624   41128 main.go:141] libmachine: (pause-036330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:02:ef", ip: ""} in network mk-pause-036330: {Iface:virbr1 ExpiryTime:2023-11-08 00:58:13 +0000 UTC Type:0 Mac:52:54:00:8b:02:ef Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-036330 Clientid:01:52:54:00:8b:02:ef}
	I1107 23:59:48.477659   41128 main.go:141] libmachine: (pause-036330) DBG | domain pause-036330 has defined IP address 192.168.39.61 and MAC address 52:54:00:8b:02:ef in network mk-pause-036330
	I1107 23:59:48.477749   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHPort
	I1107 23:59:48.477876   41128 main.go:141] libmachine: (pause-036330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:02:ef", ip: ""} in network mk-pause-036330: {Iface:virbr1 ExpiryTime:2023-11-08 00:58:13 +0000 UTC Type:0 Mac:52:54:00:8b:02:ef Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-036330 Clientid:01:52:54:00:8b:02:ef}
	I1107 23:59:48.477912   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHKeyPath
	I1107 23:59:48.477911   41128 main.go:141] libmachine: (pause-036330) DBG | domain pause-036330 has defined IP address 192.168.39.61 and MAC address 52:54:00:8b:02:ef in network mk-pause-036330
	I1107 23:59:48.478075   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHPort
	I1107 23:59:48.478087   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHUsername
	I1107 23:59:48.478255   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHKeyPath
	I1107 23:59:48.478248   41128 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/pause-036330/id_rsa Username:docker}
	I1107 23:59:48.478423   41128 main.go:141] libmachine: (pause-036330) Calling .GetSSHUsername
	I1107 23:59:48.478578   41128 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/pause-036330/id_rsa Username:docker}
	I1107 23:59:48.566217   41128 ssh_runner.go:195] Run: systemctl --version
	I1107 23:59:48.588473   41128 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1107 23:59:49.057746   41128 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1107 23:59:49.099425   41128 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1107 23:59:49.099515   41128 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:59:49.122341   41128 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1107 23:59:49.122375   41128 start.go:472] detecting cgroup driver to use...
	I1107 23:59:49.122442   41128 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1107 23:59:49.231386   41128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1107 23:59:49.275303   41128 docker.go:203] disabling cri-docker service (if available) ...
	I1107 23:59:49.275366   41128 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1107 23:59:49.310295   41128 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1107 23:59:49.354017   41128 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1107 23:59:49.648244   41128 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1107 23:59:49.959426   41128 docker.go:219] disabling docker service ...
	I1107 23:59:49.959504   41128 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1107 23:59:49.996721   41128 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1107 23:59:50.028922   41128 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1107 23:59:50.263393   41128 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1107 23:59:50.545205   41128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1107 23:59:50.596070   41128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 23:59:50.665385   41128 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1107 23:59:50.665448   41128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:59:50.718671   41128 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1107 23:59:50.718754   41128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:59:50.736580   41128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:59:50.774817   41128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
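Taken together, the three sed edits above leave the 02-crio.conf drop-in with roughly the following settings (section placement follows CRI-O's documented schema and is shown here only for orientation):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"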
	I1107 23:59:50.795680   41128 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1107 23:59:50.812199   41128 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1107 23:59:50.824740   41128 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1107 23:59:50.840695   41128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 23:59:51.125531   41128 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1107 23:59:52.501658   41128 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.376090178s)
	I1107 23:59:52.501687   41128 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1107 23:59:52.501738   41128 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1107 23:59:52.507246   41128 start.go:540] Will wait 60s for crictl version
	I1107 23:59:52.507313   41128 ssh_runner.go:195] Run: which crictl
	I1107 23:59:52.511499   41128 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1107 23:59:52.560962   41128 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1107 23:59:52.561066   41128 ssh_runner.go:195] Run: crio --version
	I1107 23:59:52.612542   41128 ssh_runner.go:195] Run: crio --version
	I1107 23:59:52.669565   41128 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1107 23:59:52.671143   41128 main.go:141] libmachine: (pause-036330) Calling .GetIP
	I1107 23:59:52.674210   41128 main.go:141] libmachine: (pause-036330) DBG | domain pause-036330 has defined MAC address 52:54:00:8b:02:ef in network mk-pause-036330
	I1107 23:59:52.674631   41128 main.go:141] libmachine: (pause-036330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:02:ef", ip: ""} in network mk-pause-036330: {Iface:virbr1 ExpiryTime:2023-11-08 00:58:13 +0000 UTC Type:0 Mac:52:54:00:8b:02:ef Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-036330 Clientid:01:52:54:00:8b:02:ef}
	I1107 23:59:52.674662   41128 main.go:141] libmachine: (pause-036330) DBG | domain pause-036330 has defined IP address 192.168.39.61 and MAC address 52:54:00:8b:02:ef in network mk-pause-036330
	I1107 23:59:52.674869   41128 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1107 23:59:52.679942   41128 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:59:52.680002   41128 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 23:59:52.992575   41128 crio.go:496] all images are preloaded for cri-o runtime.
	I1107 23:59:52.992603   41128 crio.go:415] Images already preloaded, skipping extraction
	I1107 23:59:52.992679   41128 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 23:59:53.173603   41128 crio.go:496] all images are preloaded for cri-o runtime.
	I1107 23:59:53.173629   41128 cache_images.go:84] Images are preloaded, skipping loading
	I1107 23:59:53.173690   41128 ssh_runner.go:195] Run: crio config
	I1107 23:59:53.344847   41128 cni.go:84] Creating CNI manager for ""
	I1107 23:59:53.344874   41128 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1107 23:59:53.344901   41128 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 23:59:53.344931   41128 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.61 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-036330 NodeName:pause-036330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1107 23:59:53.345104   41128 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-036330"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 23:59:53.345193   41128 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=pause-036330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:pause-036330 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1107 23:59:53.345256   41128 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1107 23:59:53.361008   41128 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 23:59:53.361100   41128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 23:59:53.373623   41128 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I1107 23:59:53.394297   41128 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 23:59:53.418386   41128 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1107 23:59:53.452027   41128 ssh_runner.go:195] Run: grep 192.168.39.61	control-plane.minikube.internal$ /etc/hosts
	I1107 23:59:53.460234   41128 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/pause-036330 for IP: 192.168.39.61
	I1107 23:59:53.460273   41128 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:59:53.460421   41128 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1107 23:59:53.460470   41128 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1107 23:59:53.460567   41128 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/pause-036330/client.key
	I1107 23:59:53.460644   41128 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/pause-036330/apiserver.key.e9ce627b
	I1107 23:59:53.460694   41128 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/pause-036330/proxy-client.key
	I1107 23:59:53.460842   41128 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem (1338 bytes)
	W1107 23:59:53.460880   41128 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848_empty.pem, impossibly tiny 0 bytes
	I1107 23:59:53.460895   41128 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1107 23:59:53.460930   41128 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1107 23:59:53.460961   41128 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1107 23:59:53.460993   41128 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1107 23:59:53.461047   41128 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem (1708 bytes)
	I1107 23:59:53.461870   41128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/pause-036330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 23:59:53.510829   41128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/pause-036330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1107 23:59:53.565205   41128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/pause-036330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 23:59:53.622891   41128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/pause-036330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1107 23:59:53.658845   41128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 23:59:53.698075   41128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1107 23:59:53.736876   41128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 23:59:53.773779   41128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1107 23:59:53.824957   41128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 23:59:53.868742   41128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem --> /usr/share/ca-certificates/16848.pem (1338 bytes)
	I1107 23:59:53.908694   41128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /usr/share/ca-certificates/168482.pem (1708 bytes)
	I1107 23:59:53.967705   41128 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 23:59:53.996534   41128 ssh_runner.go:195] Run: openssl version
	I1107 23:59:54.007986   41128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 23:59:54.027211   41128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:59:54.040932   41128 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:59:54.040997   41128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:59:54.053684   41128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 23:59:54.071816   41128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16848.pem && ln -fs /usr/share/ca-certificates/16848.pem /etc/ssl/certs/16848.pem"
	I1107 23:59:54.093004   41128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16848.pem
	I1107 23:59:54.102246   41128 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1107 23:59:54.102302   41128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16848.pem
	I1107 23:59:54.114759   41128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16848.pem /etc/ssl/certs/51391683.0"
	I1107 23:59:54.139882   41128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168482.pem && ln -fs /usr/share/ca-certificates/168482.pem /etc/ssl/certs/168482.pem"
	I1107 23:59:54.162065   41128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168482.pem
	I1107 23:59:54.170945   41128 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1107 23:59:54.171005   41128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168482.pem
	I1107 23:59:54.181046   41128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168482.pem /etc/ssl/certs/3ec20f2e.0"
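The three `ln -fs` blocks above all follow the same OpenSSL trust convention: hash the certificate's subject and point /etc/ssl/certs/<hash>.0 at the PEM. A small Go sketch of that wiring, shelling out to openssl exactly as the log does (helper name is illustrative):

	package trust

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkCert asks openssl for the certificate's subject hash and creates the
	// /etc/ssl/certs/<hash>.0 symlink that OpenSSL-based clients look up.
	func linkCert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
		os.Remove(link) // replace any stale link, as `ln -fs` would
		return os.Symlink(pemPath, link)
	}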
	I1107 23:59:54.195704   41128 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1107 23:59:54.207320   41128 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1107 23:59:54.218544   41128 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1107 23:59:54.226705   41128 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1107 23:59:54.255126   41128 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1107 23:59:54.272323   41128 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1107 23:59:54.290221   41128 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1107 23:59:54.304780   41128 kubeadm.go:404] StartCluster: {Name:pause-036330 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:pause-036330 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:59:54.304943   41128 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1107 23:59:54.305001   41128 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1107 23:59:54.390972   41128 cri.go:89] found id: "413b862783c5eb27cc45bd92f7318135d1d4ac9a36780ce42690fef1ae56f1a0"
	I1107 23:59:54.390999   41128 cri.go:89] found id: "442d5c77c1fdd41bc4e331405b997f9a73b3511a75a6217fdcd5cf6d27390f17"
	I1107 23:59:54.391006   41128 cri.go:89] found id: "e083a77f12e1c1cb6b26075c1d64703a6ec41c5b706ba5b9f0f7018e2ff1d65a"
	I1107 23:59:54.391012   41128 cri.go:89] found id: "21950f01f928951543991719c8cb206b68a47d655ee05b312ce93a8fc98df95f"
	I1107 23:59:54.391023   41128 cri.go:89] found id: "eb75ab1fbcef407bacabcb2df9bf1b14fa1099fe4618d62168bdc41962b8d0ec"
	I1107 23:59:54.391032   41128 cri.go:89] found id: "9620d3d14152f22251ec1dae4a0ff006098702c9ff635b167a04c260c73aa713"
	I1107 23:59:54.391037   41128 cri.go:89] found id: ""
	I1107 23:59:54.391105   41128 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
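The cri.go listing at the end of the stderr block shells out to crictl with a namespace label filter and collects one container ID per output line. A minimal Go sketch of that pattern (the log wraps the command in `sudo -s eval`; plain sudo is used here for brevity):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listKubeSystemContainers returns the IDs of all kube-system containers,
	// running or exited, as printed by crictl one per line.
	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		fmt.Println(ids, err)
	}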
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-036330 -n pause-036330
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-036330 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-036330 logs -n 25: (1.381572647s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p scheduled-stop-153425       | scheduled-stop-153425    | jenkins | v1.32.0 | 07 Nov 23 23:56 UTC |                     |
	|         | --schedule 5m                  |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-153425       | scheduled-stop-153425    | jenkins | v1.32.0 | 07 Nov 23 23:56 UTC |                     |
	|         | --schedule 5m                  |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-153425       | scheduled-stop-153425    | jenkins | v1.32.0 | 07 Nov 23 23:56 UTC |                     |
	|         | --schedule 5m                  |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-153425       | scheduled-stop-153425    | jenkins | v1.32.0 | 07 Nov 23 23:56 UTC |                     |
	|         | --schedule 15s                 |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-153425       | scheduled-stop-153425    | jenkins | v1.32.0 | 07 Nov 23 23:56 UTC |                     |
	|         | --schedule 15s                 |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-153425       | scheduled-stop-153425    | jenkins | v1.32.0 | 07 Nov 23 23:56 UTC |                     |
	|         | --schedule 15s                 |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-153425       | scheduled-stop-153425    | jenkins | v1.32.0 | 07 Nov 23 23:56 UTC | 07 Nov 23 23:56 UTC |
	|         | --cancel-scheduled             |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-153425       | scheduled-stop-153425    | jenkins | v1.32.0 | 07 Nov 23 23:57 UTC |                     |
	|         | --schedule 15s                 |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-153425       | scheduled-stop-153425    | jenkins | v1.32.0 | 07 Nov 23 23:57 UTC |                     |
	|         | --schedule 15s                 |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-153425       | scheduled-stop-153425    | jenkins | v1.32.0 | 07 Nov 23 23:57 UTC | 07 Nov 23 23:57 UTC |
	|         | --schedule 15s                 |                          |         |         |                     |                     |
	| delete  | -p scheduled-stop-153425       | scheduled-stop-153425    | jenkins | v1.32.0 | 07 Nov 23 23:57 UTC | 07 Nov 23 23:57 UTC |
	| start   | -p NoKubernetes-798084         | NoKubernetes-798084      | jenkins | v1.32.0 | 07 Nov 23 23:57 UTC |                     |
	|         | --no-kubernetes                |                          |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                          |         |         |                     |                     |
	|         | --driver=kvm2                  |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| start   | -p pause-036330 --memory=2048  | pause-036330             | jenkins | v1.32.0 | 07 Nov 23 23:57 UTC | 07 Nov 23 23:59 UTC |
	|         | --install-addons=false         |                          |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| start   | -p offline-crio-711737         | offline-crio-711737      | jenkins | v1.32.0 | 07 Nov 23 23:57 UTC | 08 Nov 23 00:00 UTC |
	|         | --alsologtostderr              |                          |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                          |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| start   | -p NoKubernetes-798084         | NoKubernetes-798084      | jenkins | v1.32.0 | 07 Nov 23 23:57 UTC | 07 Nov 23 23:59 UTC |
	|         | --driver=kvm2                  |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| start   | -p pause-036330                | pause-036330             | jenkins | v1.32.0 | 07 Nov 23 23:59 UTC | 08 Nov 23 00:00 UTC |
	|         | --alsologtostderr              |                          |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| start   | -p NoKubernetes-798084         | NoKubernetes-798084      | jenkins | v1.32.0 | 07 Nov 23 23:59 UTC | 07 Nov 23 23:59 UTC |
	|         | --no-kubernetes --driver=kvm2  |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| delete  | -p NoKubernetes-798084         | NoKubernetes-798084      | jenkins | v1.32.0 | 07 Nov 23 23:59 UTC | 07 Nov 23 23:59 UTC |
	| start   | -p NoKubernetes-798084         | NoKubernetes-798084      | jenkins | v1.32.0 | 07 Nov 23 23:59 UTC | 08 Nov 23 00:00 UTC |
	|         | --no-kubernetes --driver=kvm2  |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| delete  | -p offline-crio-711737         | offline-crio-711737      | jenkins | v1.32.0 | 08 Nov 23 00:00 UTC | 08 Nov 23 00:00 UTC |
	| start   | -p force-systemd-env-420594    | force-systemd-env-420594 | jenkins | v1.32.0 | 08 Nov 23 00:00 UTC |                     |
	|         | --memory=2048                  |                          |         |         |                     |                     |
	|         | --alsologtostderr              |                          |         |         |                     |                     |
	|         | -v=5 --driver=kvm2             |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| start   | -p running-upgrade-802871      | running-upgrade-802871   | jenkins | v1.32.0 | 08 Nov 23 00:00 UTC |                     |
	|         | --memory=2200                  |                          |         |         |                     |                     |
	|         | --alsologtostderr              |                          |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| ssh     | -p NoKubernetes-798084 sudo    | NoKubernetes-798084      | jenkins | v1.32.0 | 08 Nov 23 00:00 UTC |                     |
	|         | systemctl is-active --quiet    |                          |         |         |                     |                     |
	|         | service kubelet                |                          |         |         |                     |                     |
	| stop    | -p NoKubernetes-798084         | NoKubernetes-798084      | jenkins | v1.32.0 | 08 Nov 23 00:00 UTC | 08 Nov 23 00:00 UTC |
	| start   | -p NoKubernetes-798084         | NoKubernetes-798084      | jenkins | v1.32.0 | 08 Nov 23 00:00 UTC |                     |
	|         | --driver=kvm2                  |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	|---------|--------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/08 00:00:26
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 00:00:26.160288   42523 out.go:296] Setting OutFile to fd 1 ...
	I1108 00:00:26.160589   42523 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:00:26.160594   42523 out.go:309] Setting ErrFile to fd 2...
	I1108 00:00:26.160600   42523 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:00:26.160920   42523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
	I1108 00:00:26.161588   42523 out.go:303] Setting JSON to false
	I1108 00:00:26.162828   42523 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6175,"bootTime":1699395451,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 00:00:26.162897   42523 start.go:138] virtualization: kvm guest
	I1108 00:00:26.166008   42523 out.go:177] * [NoKubernetes-798084] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1108 00:00:26.168134   42523 notify.go:220] Checking for updates...
	I1108 00:00:26.168154   42523 out.go:177]   - MINIKUBE_LOCATION=17585
	I1108 00:00:26.170154   42523 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 00:00:26.172142   42523 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:00:26.174073   42523 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	I1108 00:00:26.175653   42523 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 00:00:26.177201   42523 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 00:00:26.179630   42523 config.go:182] Loaded profile config "NoKubernetes-798084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1108 00:00:26.180256   42523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:00:26.180310   42523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:00:26.195570   42523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46219
	I1108 00:00:26.196004   42523 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:00:26.196536   42523 main.go:141] libmachine: Using API Version  1
	I1108 00:00:26.196554   42523 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:00:26.197066   42523 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:00:26.197255   42523 main.go:141] libmachine: (NoKubernetes-798084) Calling .DriverName
	I1108 00:00:26.197490   42523 start.go:1772] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I1108 00:00:26.197507   42523 driver.go:378] Setting default libvirt URI to qemu:///system
	I1108 00:00:26.197878   42523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:00:26.197916   42523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:00:26.212942   42523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35607
	I1108 00:00:26.213397   42523 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:00:26.213949   42523 main.go:141] libmachine: Using API Version  1
	I1108 00:00:26.213974   42523 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:00:26.214338   42523 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:00:26.214541   42523 main.go:141] libmachine: (NoKubernetes-798084) Calling .DriverName
	I1108 00:00:26.253208   42523 out.go:177] * Using the kvm2 driver based on existing profile
	I1108 00:00:26.255589   42523 start.go:298] selected driver: kvm2
	I1108 00:00:26.255599   42523 start.go:902] validating driver "kvm2" against &{Name:NoKubernetes-798084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-798084 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.249 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:00:26.255741   42523 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 00:00:26.256209   42523 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:00:26.256290   42523 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17585-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1108 00:00:26.271959   42523 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1108 00:00:26.273070   42523 cni.go:84] Creating CNI manager for ""
	I1108 00:00:26.273084   42523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:00:26.273096   42523 start_flags.go:323] config:
	{Name:NoKubernetes-798084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-798084 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.249 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:00:26.273292   42523 iso.go:125] acquiring lock: {Name:mk02d02b2a7a45dbdd1b46a32fb0724673cb4d8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:00:26.276033   42523 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-798084
	I1108 00:00:23.170268   41128 pod_ready.go:102] pod "coredns-5dd5756b68-k9sl9" in "kube-system" namespace has status "Ready":"False"
	I1108 00:00:25.171089   41128 pod_ready.go:102] pod "coredns-5dd5756b68-k9sl9" in "kube-system" namespace has status "Ready":"False"
	I1108 00:00:26.670335   41128 pod_ready.go:92] pod "coredns-5dd5756b68-k9sl9" in "kube-system" namespace has status "Ready":"True"
	I1108 00:00:26.670361   41128 pod_ready.go:81] duration metric: took 7.523425952s waiting for pod "coredns-5dd5756b68-k9sl9" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:26.670374   41128 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:26.677889   41128 pod_ready.go:92] pod "etcd-pause-036330" in "kube-system" namespace has status "Ready":"True"
	I1108 00:00:26.677918   41128 pod_ready.go:81] duration metric: took 7.535478ms waiting for pod "etcd-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:26.677932   41128 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:23.970923   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | domain force-systemd-env-420594 has defined MAC address 52:54:00:2a:3f:cf in network mk-force-systemd-env-420594
	I1108 00:00:23.971364   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | unable to find current IP address of domain force-systemd-env-420594 in network mk-force-systemd-env-420594
	I1108 00:00:23.971395   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | I1108 00:00:23.971312   41789 retry.go:31] will retry after 592.949921ms: waiting for machine to come up
	I1108 00:00:24.567677   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | domain force-systemd-env-420594 has defined MAC address 52:54:00:2a:3f:cf in network mk-force-systemd-env-420594
	I1108 00:00:24.568207   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | unable to find current IP address of domain force-systemd-env-420594 in network mk-force-systemd-env-420594
	I1108 00:00:24.568233   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | I1108 00:00:24.568128   41789 retry.go:31] will retry after 594.84646ms: waiting for machine to come up
	I1108 00:00:25.165040   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | domain force-systemd-env-420594 has defined MAC address 52:54:00:2a:3f:cf in network mk-force-systemd-env-420594
	I1108 00:00:25.165650   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | unable to find current IP address of domain force-systemd-env-420594 in network mk-force-systemd-env-420594
	I1108 00:00:25.165678   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | I1108 00:00:25.165557   41789 retry.go:31] will retry after 674.335799ms: waiting for machine to come up
	I1108 00:00:25.841236   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | domain force-systemd-env-420594 has defined MAC address 52:54:00:2a:3f:cf in network mk-force-systemd-env-420594
	I1108 00:00:25.841577   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | unable to find current IP address of domain force-systemd-env-420594 in network mk-force-systemd-env-420594
	I1108 00:00:25.841611   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | I1108 00:00:25.841549   41789 retry.go:31] will retry after 765.193878ms: waiting for machine to come up
	I1108 00:00:26.607964   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | domain force-systemd-env-420594 has defined MAC address 52:54:00:2a:3f:cf in network mk-force-systemd-env-420594
	I1108 00:00:26.608361   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | unable to find current IP address of domain force-systemd-env-420594 in network mk-force-systemd-env-420594
	I1108 00:00:26.608383   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | I1108 00:00:26.608317   41789 retry.go:31] will retry after 947.459789ms: waiting for machine to come up
	I1108 00:00:27.557841   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | domain force-systemd-env-420594 has defined MAC address 52:54:00:2a:3f:cf in network mk-force-systemd-env-420594
	I1108 00:00:27.558347   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | unable to find current IP address of domain force-systemd-env-420594 in network mk-force-systemd-env-420594
	I1108 00:00:27.558379   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | I1108 00:00:27.558297   41789 retry.go:31] will retry after 1.727534466s: waiting for machine to come up
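The repeated "will retry after" lines above are libmachine polling libvirt for the guest's DHCP lease with a jittered, growing delay. A minimal Go sketch of that pattern, assuming a hypothetical lookupIP helper in place of the real lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for libmachine's libvirt lease query;
// it fails until the guest has obtained an address.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries with a jittered, growing delay, mirroring the
// "will retry after ..." lines emitted by retry.go above.
func waitForIP(domain string, timeout time.Duration) (string, error) {
	base := 500 * time.Millisecond
	for start := time.Now(); time.Since(start) < timeout; {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		delay := base + time.Duration(rand.Int63n(int64(base))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		base = base * 3 / 2 // grow the backoff, roughly like the observed delays
	}
	return "", fmt.Errorf("domain %s did not come up within %v", domain, timeout)
}

func main() {
	if _, err := waitForIP("force-systemd-env-420594", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}

The jitter keeps concurrent waiters from polling the hypervisor in lockstep, which is why the logged delays above are irregular rather than an exact geometric series.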
	I1108 00:00:26.277517   42523 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W1108 00:00:26.474419   42523 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1108 00:00:26.474553   42523 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/NoKubernetes-798084/config.json ...
	I1108 00:00:26.474771   42523 start.go:365] acquiring machines lock for NoKubernetes-798084: {Name:mkf032f30be570950285b6e092e75fb29cc3d166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
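The 404 above is the expected result of probing for a version-specific preload tarball: none is published for v0.0.0, so minikube falls back to loading images individually. A rough sketch of such an existence probe (preloadExists is illustrative, not minikube's API), using the URL from the log:

package main

import (
	"fmt"
	"net/http"
)

// preloadExists issues a HEAD request and treats anything but 200 as
// "no preload published", which is what the 404 above amounts to.
func preloadExists(url string) (bool, error) {
	resp, err := http.Head(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4"
	ok, err := preloadExists(url)
	fmt.Println(ok, err) // expect: false <nil>, matching the 404 logged above
}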
	I1108 00:00:28.700486   41128 pod_ready.go:102] pod "kube-apiserver-pause-036330" in "kube-system" namespace has status "Ready":"False"
	I1108 00:00:30.321423   41128 pod_ready.go:92] pod "kube-apiserver-pause-036330" in "kube-system" namespace has status "Ready":"True"
	I1108 00:00:30.321447   41128 pod_ready.go:81] duration metric: took 3.643506993s waiting for pod "kube-apiserver-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:30.321457   41128 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:30.333145   41128 pod_ready.go:92] pod "kube-controller-manager-pause-036330" in "kube-system" namespace has status "Ready":"True"
	I1108 00:00:30.333177   41128 pod_ready.go:81] duration metric: took 11.712574ms waiting for pod "kube-controller-manager-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:30.333191   41128 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cfpsq" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:30.348801   41128 pod_ready.go:92] pod "kube-proxy-cfpsq" in "kube-system" namespace has status "Ready":"True"
	I1108 00:00:30.348845   41128 pod_ready.go:81] duration metric: took 15.644603ms waiting for pod "kube-proxy-cfpsq" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:30.348860   41128 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:32.682262   41128 pod_ready.go:92] pod "kube-scheduler-pause-036330" in "kube-system" namespace has status "Ready":"True"
	I1108 00:00:32.682286   41128 pod_ready.go:81] duration metric: took 2.33341915s waiting for pod "kube-scheduler-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:32.682297   41128 pod_ready.go:38] duration metric: took 13.540703313s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:00:32.682325   41128 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 00:00:32.696095   41128 ops.go:34] apiserver oom_adj: -16
	I1108 00:00:32.696117   41128 kubeadm.go:640] restartCluster took 38.167523728s
	I1108 00:00:32.696127   41128 kubeadm.go:406] StartCluster complete in 38.391355453s
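The oom_adj probe a few lines up verifies the apiserver's OOM protection; -16 tells the kernel to strongly avoid killing that process. Roughly the same check without the ssh_runner indirection, assuming it runs on the node itself:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Equivalent of: cat /proc/$(pgrep kube-apiserver)/oom_adj
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
		return
	}
	pid := strings.Fields(string(out))[0] // first match, as in the shell pipeline
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", adj) // the run above printed -16
}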
	I1108 00:00:32.696149   41128 settings.go:142] acquiring lock: {Name:mk24113e0811d0822c92609e9886aa6fa175d90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:00:32.696234   41128 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:00:32.697133   41128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:00:32.697423   41128 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 00:00:32.697521   41128 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1108 00:00:32.699450   41128 out.go:177] * Enabled addons: 
	I1108 00:00:32.697708   41128 config.go:182] Loaded profile config "pause-036330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:00:32.698036   41128 kapi.go:59] client config for pause-036330: &rest.Config{Host:"https://192.168.39.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/pause-036330/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/pause-036330/client.key", CAFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1108 00:00:32.701051   41128 addons.go:502] enable addons completed in 3.529312ms: enabled=[]
	I1108 00:00:32.704781   41128 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-036330" context rescaled to 1 replicas
	I1108 00:00:32.704833   41128 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 00:00:32.706471   41128 out.go:177] * Verifying Kubernetes components...
	I1108 00:00:29.287443   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | domain force-systemd-env-420594 has defined MAC address 52:54:00:2a:3f:cf in network mk-force-systemd-env-420594
	I1108 00:00:29.287887   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | unable to find current IP address of domain force-systemd-env-420594 in network mk-force-systemd-env-420594
	I1108 00:00:29.287908   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | I1108 00:00:29.287860   41789 retry.go:31] will retry after 1.803959238s: waiting for machine to come up
	I1108 00:00:31.093432   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | domain force-systemd-env-420594 has defined MAC address 52:54:00:2a:3f:cf in network mk-force-systemd-env-420594
	I1108 00:00:31.093832   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | unable to find current IP address of domain force-systemd-env-420594 in network mk-force-systemd-env-420594
	I1108 00:00:31.093865   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | I1108 00:00:31.093790   41789 retry.go:31] will retry after 2.3181566s: waiting for machine to come up
	I1108 00:00:33.414142   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | domain force-systemd-env-420594 has defined MAC address 52:54:00:2a:3f:cf in network mk-force-systemd-env-420594
	I1108 00:00:33.414588   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | unable to find current IP address of domain force-systemd-env-420594 in network mk-force-systemd-env-420594
	I1108 00:00:33.414620   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | I1108 00:00:33.414537   41789 retry.go:31] will retry after 3.40201263s: waiting for machine to come up
	I1108 00:00:32.707892   41128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:00:32.809123   41128 node_ready.go:35] waiting up to 6m0s for node "pause-036330" to be "Ready" ...
	I1108 00:00:32.809178   41128 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1108 00:00:32.814445   41128 node_ready.go:49] node "pause-036330" has status "Ready":"True"
	I1108 00:00:32.814465   41128 node_ready.go:38] duration metric: took 5.30867ms waiting for node "pause-036330" to be "Ready" ...
	I1108 00:00:32.814473   41128 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:00:32.825365   41128 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-k9sl9" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:33.065302   41128 pod_ready.go:92] pod "coredns-5dd5756b68-k9sl9" in "kube-system" namespace has status "Ready":"True"
	I1108 00:00:33.065345   41128 pod_ready.go:81] duration metric: took 239.956744ms waiting for pod "coredns-5dd5756b68-k9sl9" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:33.065360   41128 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:33.468246   41128 pod_ready.go:92] pod "etcd-pause-036330" in "kube-system" namespace has status "Ready":"True"
	I1108 00:00:33.468269   41128 pod_ready.go:81] duration metric: took 402.900635ms waiting for pod "etcd-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:33.468282   41128 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:33.865242   41128 pod_ready.go:92] pod "kube-apiserver-pause-036330" in "kube-system" namespace has status "Ready":"True"
	I1108 00:00:33.865269   41128 pod_ready.go:81] duration metric: took 396.97924ms waiting for pod "kube-apiserver-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:33.865279   41128 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:34.264684   41128 pod_ready.go:92] pod "kube-controller-manager-pause-036330" in "kube-system" namespace has status "Ready":"True"
	I1108 00:00:34.264712   41128 pod_ready.go:81] duration metric: took 399.425727ms waiting for pod "kube-controller-manager-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:34.264731   41128 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cfpsq" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:34.667374   41128 pod_ready.go:92] pod "kube-proxy-cfpsq" in "kube-system" namespace has status "Ready":"True"
	I1108 00:00:34.667402   41128 pod_ready.go:81] duration metric: took 402.661248ms waiting for pod "kube-proxy-cfpsq" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:34.667414   41128 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:35.065198   41128 pod_ready.go:92] pod "kube-scheduler-pause-036330" in "kube-system" namespace has status "Ready":"True"
	I1108 00:00:35.065220   41128 pod_ready.go:81] duration metric: took 397.799123ms waiting for pod "kube-scheduler-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:35.065229   41128 pod_ready.go:38] duration metric: took 2.250746367s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
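Every pod_ready block above applies one primitive per component label: poll the pod until its PodReady condition turns True, then log the elapsed time as a duration metric. A condensed client-go version of that loop; the kubeconfig path is taken from the log, while the 2s poll interval is an assumption:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the pod's PodReady condition is True, returning
// the elapsed time, like the "duration metric: took ..." lines above.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) (time.Duration, error) {
	start := time.Now()
	for time.Since(start) < timeout {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return time.Since(start), nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return 0, fmt.Errorf("pod %s/%s not Ready after %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17585-9647/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	d, err := waitPodReady(cs, "kube-system", "etcd-pause-036330", 6*time.Minute)
	fmt.Println(d, err)
}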
	I1108 00:00:35.065243   41128 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:00:35.065286   41128 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:00:35.082635   41128 api_server.go:72] duration metric: took 2.377771301s to wait for apiserver process to appear ...
	I1108 00:00:35.082656   41128 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:00:35.082671   41128 api_server.go:253] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I1108 00:00:35.088478   41128 api_server.go:279] https://192.168.39.61:8443/healthz returned 200:
	ok
	I1108 00:00:35.089772   41128 api_server.go:141] control plane version: v1.28.3
	I1108 00:00:35.089795   41128 api_server.go:131] duration metric: took 7.132864ms to wait for apiserver health ...
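The healthz wait is an HTTPS GET against the apiserver, repeated until it answers 200 with body "ok". A bare-bones equivalent; unlike the real check, which trusts the minikube CA and presents the client certificate from the rest.Config dumped earlier, this sketch skips TLS verification for brevity:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: the real check verifies against the minikube CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 30; i++ {
		resp, err := client.Get("https://192.168.39.61:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // "ok"
				return
			}
		}
		time.Sleep(time.Second)
	}
	fmt.Println("apiserver never became healthy")
}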
	I1108 00:00:35.089805   41128 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:00:35.267721   41128 system_pods.go:59] 6 kube-system pods found
	I1108 00:00:35.267753   41128 system_pods.go:61] "coredns-5dd5756b68-k9sl9" [3362d1b2-8097-4aed-bbd6-a93177532c85] Running
	I1108 00:00:35.267759   41128 system_pods.go:61] "etcd-pause-036330" [bf0a46a2-4df3-48df-a448-4deae4726e48] Running
	I1108 00:00:35.267763   41128 system_pods.go:61] "kube-apiserver-pause-036330" [0568f13b-c68a-4c4a-8b68-a384fd8006b8] Running
	I1108 00:00:35.267767   41128 system_pods.go:61] "kube-controller-manager-pause-036330" [351b550d-060f-44d7-9b98-98b378ceaac9] Running
	I1108 00:00:35.267771   41128 system_pods.go:61] "kube-proxy-cfpsq" [588e1aff-542f-4465-b4fc-d6184da50a28] Running
	I1108 00:00:35.267774   41128 system_pods.go:61] "kube-scheduler-pause-036330" [42727091-3480-42c6-adf6-8f9fe39f3f3d] Running
	I1108 00:00:35.267780   41128 system_pods.go:74] duration metric: took 177.96941ms to wait for pod list to return data ...
	I1108 00:00:35.267788   41128 default_sa.go:34] waiting for default service account to be created ...
	I1108 00:00:35.465036   41128 default_sa.go:45] found service account: "default"
	I1108 00:00:35.465064   41128 default_sa.go:55] duration metric: took 197.267871ms for default service account to be created ...
	I1108 00:00:35.465072   41128 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 00:00:35.668162   41128 system_pods.go:86] 6 kube-system pods found
	I1108 00:00:35.668189   41128 system_pods.go:89] "coredns-5dd5756b68-k9sl9" [3362d1b2-8097-4aed-bbd6-a93177532c85] Running
	I1108 00:00:35.668197   41128 system_pods.go:89] "etcd-pause-036330" [bf0a46a2-4df3-48df-a448-4deae4726e48] Running
	I1108 00:00:35.668202   41128 system_pods.go:89] "kube-apiserver-pause-036330" [0568f13b-c68a-4c4a-8b68-a384fd8006b8] Running
	I1108 00:00:35.668207   41128 system_pods.go:89] "kube-controller-manager-pause-036330" [351b550d-060f-44d7-9b98-98b378ceaac9] Running
	I1108 00:00:35.668212   41128 system_pods.go:89] "kube-proxy-cfpsq" [588e1aff-542f-4465-b4fc-d6184da50a28] Running
	I1108 00:00:35.668217   41128 system_pods.go:89] "kube-scheduler-pause-036330" [42727091-3480-42c6-adf6-8f9fe39f3f3d] Running
	I1108 00:00:35.668225   41128 system_pods.go:126] duration metric: took 203.148416ms to wait for k8s-apps to be running ...
	I1108 00:00:35.668234   41128 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 00:00:35.668277   41128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:00:35.683012   41128 system_svc.go:56] duration metric: took 14.770591ms WaitForService to wait for kubelet.
	I1108 00:00:35.683038   41128 kubeadm.go:581] duration metric: took 2.978176114s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1108 00:00:35.683059   41128 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:00:35.866013   41128 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:00:35.866049   41128 node_conditions.go:123] node cpu capacity is 2
	I1108 00:00:35.866064   41128 node_conditions.go:105] duration metric: took 182.999357ms to run NodePressure ...
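The NodePressure verification reads ephemeral-storage and CPU capacity straight off the Node object, as the two log lines above show. Fetching the same numbers with client-go (node name and kubeconfig path taken from the log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17585-9647/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "pause-036330", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	// The run above logged 17784752Ki of ephemeral storage and 2 CPUs.
	fmt.Printf("ephemeral storage: %s, cpu: %s\n", storage.String(), cpu.String())
}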
	I1108 00:00:35.866079   41128 start.go:228] waiting for startup goroutines ...
	I1108 00:00:35.866094   41128 start.go:233] waiting for cluster config update ...
	I1108 00:00:35.866104   41128 start.go:242] writing updated cluster config ...
	I1108 00:00:35.866492   41128 ssh_runner.go:195] Run: rm -f paused
	I1108 00:00:35.926801   41128 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1108 00:00:35.929084   41128 out.go:177] * Done! kubectl is now configured to use "pause-036330" cluster and "default" namespace by default
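The closing lines compare kubectl's minor version against the cluster's (skew 0 here, so no warning is printed). A small sketch of that comparison, assuming well-formed "major.minor.patch" strings:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components of
// two "major.minor.patch" version strings.
func minorSkew(a, b string) (int, error) {
	ma, err := strconv.Atoi(strings.Split(a, ".")[1])
	if err != nil {
		return 0, err
	}
	mb, err := strconv.Atoi(strings.Split(b, ".")[1])
	if err != nil {
		return 0, err
	}
	if ma > mb {
		return ma - mb, nil
	}
	return mb - ma, nil
}

func main() {
	skew, err := minorSkew("1.28.3", "1.28.3")
	fmt.Println(skew, err) // 0 <nil>: kubectl and the cluster agree
}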
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-07 23:58:09 UTC, ends at Wed 2023-11-08 00:00:36 UTC. --
	Nov 08 00:00:36 pause-036330 crio[2447]: time="2023-11-08 00:00:36.639661667Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699401636639646813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=b9244697-d03a-4da9-b187-00ce45fbe5a7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:00:36 pause-036330 crio[2447]: time="2023-11-08 00:00:36.640487737Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1ee35b74-fa32-4045-99d4-fec1bdc2216a name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:00:36 pause-036330 crio[2447]: time="2023-11-08 00:00:36.640620963Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1ee35b74-fa32-4045-99d4-fec1bdc2216a name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:00:36 pause-036330 crio[2447]: time="2023-11-08 00:00:36.640944884Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f40d5f8ce73a44634f651f67452a614f1f170a4604127bad9b015e24877c8c3b,PodSandboxId:0dfa73a60a494bdbf64ccd4fc9bbb2eed546e6a0b700ee565eefd139fe43f0b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699401618503757888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-k9sl9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3362d1b2-8097-4aed-bbd6-a93177532c85,},Annotations:map[string]string{io.kubernetes.container.hash: 1e7f170f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c61e946f9e193f32ef80c8f891749abb38ff8eb408686ec5720d342cb8e43ff2,PodSandboxId:e2516d488721e4ce70b77b42b1b5843bc810b845fd5ae063d80f76e82f2865f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699401611869171830,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ff3041dafe
4d9005156a66947fd07c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f000fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b64418769a4c6b36fc93623d7b4187c09c8ebde822957a37a869d6c6286e2ff,PodSandboxId:52b1d6b035012634948c7c691361c6b9dfb4310dca5377b1131f9b8c4154f8f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699401611900527441,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e59559b1117bcb3a35bdb8c023
0031,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd98a636db4d95eeca6ce3a068699916169030db5fa6be733832b8f96a0d45e,PodSandboxId:79b7d62661620af23b510c2fd89d2f3efc39c7486370d93577600db862936d4d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699401611922653279,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b62ce9a74062c222190d8e23b7a456,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 87461a69,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8568358cb5fe9d3eff8b570ef6f715dab6866f8417fc09623e1d038335943540,PodSandboxId:7443dc923609d058dc48bf9e065ca9370de1658a1db9dccc3498d60c4850ce01,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699401611856424144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564c49d2b05d84df711e346
c281f90c,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cdce1363460bbac4ae3acf4b7b4a124217ba0a6d982f8c3634e46d5c12511fb,PodSandboxId:c6340d15937badcaf035767a7703d93f90689c6253f27bab7d3e446cc771302b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699401605801485436,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cfpsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 588e1aff-542f-4465-b4fc-d6184da50a28,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 231d1420,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be15dbb404987f35e6e02b6e7dedf71d91e3a87054cb2c6ee8fafab396254127,PodSandboxId:0dfa73a60a494bdbf64ccd4fc9bbb2eed546e6a0b700ee565eefd139fe43f0b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1699401594969298090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-k9sl9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3362d1b2-8097-4aed-bbd6-a93177532c85,},Annotations:map[string]string{io.kubernetes.container
.hash: 1e7f170f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f25b64573b4714ea0285074dfe03ee7adee9595925e50374d5d3cb6444e13542,PodSandboxId:52b1d6b035012634948c7c691361c6b9dfb4310dca5377b1131f9b8c4154f8f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_EXITED,CreatedAt:1699401594587624859,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e59559b1117bcb3a35bdb8c0230031,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:442d5c77c1fdd41bc4e331405b997f9a73b3511a75a6217fdcd5cf6d27390f17,PodSandboxId:b14b72522bcc8e6e53f529c44aa66a1e032f0082d785e4b55b047f57bd307c4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1699401590422299064,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-036330,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 44ff3041dafe4d9005156a66947fd07c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f000fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:413b862783c5eb27cc45bd92f7318135d1d4ac9a36780ce42690fef1ae56f1a0,PodSandboxId:70a91342384b60c8444eb9ad6acf5d9417aad6c311f161f2f31680717e55ee1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1699401590685742127,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 7564c49d2b05d84df711e346c281f90c,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e083a77f12e1c1cb6b26075c1d64703a6ec41c5b706ba5b9f0f7018e2ff1d65a,PodSandboxId:8e3f779c871c2cb19806fdf8f4a17b424c1f9f0b43a543fb7745c4e9a93d73de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1699401589756716067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b62ce9a74062c222190d8e23b7a456,},Annotations:
map[string]string{io.kubernetes.container.hash: 87461a69,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21950f01f928951543991719c8cb206b68a47d655ee05b312ce93a8fc98df95f,PodSandboxId:e972dd03204c7d123371d66e069a6435dd251c55e9ad824d0eb84975d6c55215,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1699401541569663154,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cfpsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 588e1aff-542f-4465-b4fc-d6184da50a28,},Annotations:map[string]string{io.kubernetes.container.hash: 231d1420,io.kubernetes
.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1ee35b74-fa32-4045-99d4-fec1bdc2216a name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:00:36 pause-036330 crio[2447]: time="2023-11-08 00:00:36.687871871Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=51ab1f0d-725a-40d3-95ae-d528e47c2161 name=/runtime.v1.RuntimeService/Version
	Nov 08 00:00:36 pause-036330 crio[2447]: time="2023-11-08 00:00:36.688002748Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=51ab1f0d-725a-40d3-95ae-d528e47c2161 name=/runtime.v1.RuntimeService/Version
	Nov 08 00:00:36 pause-036330 crio[2447]: time="2023-11-08 00:00:36.688875396Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e6391c43-85f3-4977-abf8-33038c7bf930 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:00:36 pause-036330 crio[2447]: time="2023-11-08 00:00:36.689226345Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699401636689213552,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=e6391c43-85f3-4977-abf8-33038c7bf930 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:00:36 pause-036330 crio[2447]: time="2023-11-08 00:00:36.689759846Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7236850f-f087-4bfa-8b4c-7ee541a9137c name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:00:36 pause-036330 crio[2447]: time="2023-11-08 00:00:36.689808457Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7236850f-f087-4bfa-8b4c-7ee541a9137c name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:00:36 pause-036330 crio[2447]: time="2023-11-08 00:00:36.690073733Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f40d5f8ce73a44634f651f67452a614f1f170a4604127bad9b015e24877c8c3b,PodSandboxId:0dfa73a60a494bdbf64ccd4fc9bbb2eed546e6a0b700ee565eefd139fe43f0b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699401618503757888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-k9sl9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3362d1b2-8097-4aed-bbd6-a93177532c85,},Annotations:map[string]string{io.kubernetes.container.hash: 1e7f170f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c61e946f9e193f32ef80c8f891749abb38ff8eb408686ec5720d342cb8e43ff2,PodSandboxId:e2516d488721e4ce70b77b42b1b5843bc810b845fd5ae063d80f76e82f2865f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699401611869171830,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ff3041dafe
4d9005156a66947fd07c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f000fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b64418769a4c6b36fc93623d7b4187c09c8ebde822957a37a869d6c6286e2ff,PodSandboxId:52b1d6b035012634948c7c691361c6b9dfb4310dca5377b1131f9b8c4154f8f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699401611900527441,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e59559b1117bcb3a35bdb8c023
0031,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd98a636db4d95eeca6ce3a068699916169030db5fa6be733832b8f96a0d45e,PodSandboxId:79b7d62661620af23b510c2fd89d2f3efc39c7486370d93577600db862936d4d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699401611922653279,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b62ce9a74062c222190d8e23b7a456,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 87461a69,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8568358cb5fe9d3eff8b570ef6f715dab6866f8417fc09623e1d038335943540,PodSandboxId:7443dc923609d058dc48bf9e065ca9370de1658a1db9dccc3498d60c4850ce01,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699401611856424144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564c49d2b05d84df711e346
c281f90c,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cdce1363460bbac4ae3acf4b7b4a124217ba0a6d982f8c3634e46d5c12511fb,PodSandboxId:c6340d15937badcaf035767a7703d93f90689c6253f27bab7d3e446cc771302b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699401605801485436,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cfpsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 588e1aff-542f-4465-b4fc-d6184da50a28,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 231d1420,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be15dbb404987f35e6e02b6e7dedf71d91e3a87054cb2c6ee8fafab396254127,PodSandboxId:0dfa73a60a494bdbf64ccd4fc9bbb2eed546e6a0b700ee565eefd139fe43f0b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1699401594969298090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-k9sl9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3362d1b2-8097-4aed-bbd6-a93177532c85,},Annotations:map[string]string{io.kubernetes.container
.hash: 1e7f170f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f25b64573b4714ea0285074dfe03ee7adee9595925e50374d5d3cb6444e13542,PodSandboxId:52b1d6b035012634948c7c691361c6b9dfb4310dca5377b1131f9b8c4154f8f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_EXITED,CreatedAt:1699401594587624859,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e59559b1117bcb3a35bdb8c0230031,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:442d5c77c1fdd41bc4e331405b997f9a73b3511a75a6217fdcd5cf6d27390f17,PodSandboxId:b14b72522bcc8e6e53f529c44aa66a1e032f0082d785e4b55b047f57bd307c4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1699401590422299064,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-036330,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 44ff3041dafe4d9005156a66947fd07c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f000fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:413b862783c5eb27cc45bd92f7318135d1d4ac9a36780ce42690fef1ae56f1a0,PodSandboxId:70a91342384b60c8444eb9ad6acf5d9417aad6c311f161f2f31680717e55ee1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1699401590685742127,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 7564c49d2b05d84df711e346c281f90c,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e083a77f12e1c1cb6b26075c1d64703a6ec41c5b706ba5b9f0f7018e2ff1d65a,PodSandboxId:8e3f779c871c2cb19806fdf8f4a17b424c1f9f0b43a543fb7745c4e9a93d73de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1699401589756716067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b62ce9a74062c222190d8e23b7a456,},Annotations:
map[string]string{io.kubernetes.container.hash: 87461a69,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21950f01f928951543991719c8cb206b68a47d655ee05b312ce93a8fc98df95f,PodSandboxId:e972dd03204c7d123371d66e069a6435dd251c55e9ad824d0eb84975d6c55215,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1699401541569663154,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cfpsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 588e1aff-542f-4465-b4fc-d6184da50a28,},Annotations:map[string]string{io.kubernetes.container.hash: 231d1420,io.kubernetes
.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7236850f-f087-4bfa-8b4c-7ee541a9137c name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:00:36 pause-036330 crio[2447]: time="2023-11-08 00:00:36.737774053Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e9c31721-3b52-49af-8989-29636a811efe name=/runtime.v1.RuntimeService/Version
	Nov 08 00:00:36 pause-036330 crio[2447]: time="2023-11-08 00:00:36.737831736Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e9c31721-3b52-49af-8989-29636a811efe name=/runtime.v1.RuntimeService/Version
	Nov 08 00:00:36 pause-036330 crio[2447]: time="2023-11-08 00:00:36.738740115Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b13490a1-7f98-400d-a1fc-cfe996a24116 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:00:36 pause-036330 crio[2447]: time="2023-11-08 00:00:36.739113017Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699401636739098250,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=b13490a1-7f98-400d-a1fc-cfe996a24116 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:00:36 pause-036330 crio[2447]: time="2023-11-08 00:00:36.740191501Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a655624d-7f16-4dff-9287-4c4ff6f74bcf name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:00:36 pause-036330 crio[2447]: time="2023-11-08 00:00:36.740242021Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a655624d-7f16-4dff-9287-4c4ff6f74bcf name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:00:36 pause-036330 crio[2447]: time="2023-11-08 00:00:36.740650586Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f40d5f8ce73a44634f651f67452a614f1f170a4604127bad9b015e24877c8c3b,PodSandboxId:0dfa73a60a494bdbf64ccd4fc9bbb2eed546e6a0b700ee565eefd139fe43f0b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699401618503757888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-k9sl9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3362d1b2-8097-4aed-bbd6-a93177532c85,},Annotations:map[string]string{io.kubernetes.container.hash: 1e7f170f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c61e946f9e193f32ef80c8f891749abb38ff8eb408686ec5720d342cb8e43ff2,PodSandboxId:e2516d488721e4ce70b77b42b1b5843bc810b845fd5ae063d80f76e82f2865f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699401611869171830,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ff3041dafe
4d9005156a66947fd07c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f000fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b64418769a4c6b36fc93623d7b4187c09c8ebde822957a37a869d6c6286e2ff,PodSandboxId:52b1d6b035012634948c7c691361c6b9dfb4310dca5377b1131f9b8c4154f8f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699401611900527441,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e59559b1117bcb3a35bdb8c023
0031,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd98a636db4d95eeca6ce3a068699916169030db5fa6be733832b8f96a0d45e,PodSandboxId:79b7d62661620af23b510c2fd89d2f3efc39c7486370d93577600db862936d4d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699401611922653279,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b62ce9a74062c222190d8e23b7a456,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 87461a69,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8568358cb5fe9d3eff8b570ef6f715dab6866f8417fc09623e1d038335943540,PodSandboxId:7443dc923609d058dc48bf9e065ca9370de1658a1db9dccc3498d60c4850ce01,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699401611856424144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564c49d2b05d84df711e346
c281f90c,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cdce1363460bbac4ae3acf4b7b4a124217ba0a6d982f8c3634e46d5c12511fb,PodSandboxId:c6340d15937badcaf035767a7703d93f90689c6253f27bab7d3e446cc771302b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699401605801485436,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cfpsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 588e1aff-542f-4465-b4fc-d6184da50a28,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 231d1420,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be15dbb404987f35e6e02b6e7dedf71d91e3a87054cb2c6ee8fafab396254127,PodSandboxId:0dfa73a60a494bdbf64ccd4fc9bbb2eed546e6a0b700ee565eefd139fe43f0b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1699401594969298090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-k9sl9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3362d1b2-8097-4aed-bbd6-a93177532c85,},Annotations:map[string]string{io.kubernetes.container
.hash: 1e7f170f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f25b64573b4714ea0285074dfe03ee7adee9595925e50374d5d3cb6444e13542,PodSandboxId:52b1d6b035012634948c7c691361c6b9dfb4310dca5377b1131f9b8c4154f8f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_EXITED,CreatedAt:1699401594587624859,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e59559b1117bcb3a35bdb8c0230031,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:442d5c77c1fdd41bc4e331405b997f9a73b3511a75a6217fdcd5cf6d27390f17,PodSandboxId:b14b72522bcc8e6e53f529c44aa66a1e032f0082d785e4b55b047f57bd307c4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1699401590422299064,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-036330,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 44ff3041dafe4d9005156a66947fd07c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f000fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:413b862783c5eb27cc45bd92f7318135d1d4ac9a36780ce42690fef1ae56f1a0,PodSandboxId:70a91342384b60c8444eb9ad6acf5d9417aad6c311f161f2f31680717e55ee1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1699401590685742127,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 7564c49d2b05d84df711e346c281f90c,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e083a77f12e1c1cb6b26075c1d64703a6ec41c5b706ba5b9f0f7018e2ff1d65a,PodSandboxId:8e3f779c871c2cb19806fdf8f4a17b424c1f9f0b43a543fb7745c4e9a93d73de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1699401589756716067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b62ce9a74062c222190d8e23b7a456,},Annotations:
map[string]string{io.kubernetes.container.hash: 87461a69,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21950f01f928951543991719c8cb206b68a47d655ee05b312ce93a8fc98df95f,PodSandboxId:e972dd03204c7d123371d66e069a6435dd251c55e9ad824d0eb84975d6c55215,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1699401541569663154,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cfpsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 588e1aff-542f-4465-b4fc-d6184da50a28,},Annotations:map[string]string{io.kubernetes.container.hash: 231d1420,io.kubernetes
.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a655624d-7f16-4dff-9287-4c4ff6f74bcf name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:00:36 pause-036330 crio[2447]: time="2023-11-08 00:00:36.786829309Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3e8cc717-fa20-46ba-b40e-c3641ac22cff name=/runtime.v1.RuntimeService/Version
	Nov 08 00:00:36 pause-036330 crio[2447]: time="2023-11-08 00:00:36.786891242Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3e8cc717-fa20-46ba-b40e-c3641ac22cff name=/runtime.v1.RuntimeService/Version
	Nov 08 00:00:36 pause-036330 crio[2447]: time="2023-11-08 00:00:36.788652306Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=40747cff-0abf-4e29-9ba5-9419773e2d0c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:00:36 pause-036330 crio[2447]: time="2023-11-08 00:00:36.789054335Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699401636789029543,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=40747cff-0abf-4e29-9ba5-9419773e2d0c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:00:36 pause-036330 crio[2447]: time="2023-11-08 00:00:36.789746961Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bac6c75b-c830-4fe9-89ee-d3c3b6f1fe60 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:00:36 pause-036330 crio[2447]: time="2023-11-08 00:00:36.789816931Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bac6c75b-c830-4fe9-89ee-d3c3b6f1fe60 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:00:36 pause-036330 crio[2447]: time="2023-11-08 00:00:36.790164989Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f40d5f8ce73a44634f651f67452a614f1f170a4604127bad9b015e24877c8c3b,PodSandboxId:0dfa73a60a494bdbf64ccd4fc9bbb2eed546e6a0b700ee565eefd139fe43f0b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699401618503757888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-k9sl9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3362d1b2-8097-4aed-bbd6-a93177532c85,},Annotations:map[string]string{io.kubernetes.container.hash: 1e7f170f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c61e946f9e193f32ef80c8f891749abb38ff8eb408686ec5720d342cb8e43ff2,PodSandboxId:e2516d488721e4ce70b77b42b1b5843bc810b845fd5ae063d80f76e82f2865f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699401611869171830,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ff3041dafe
4d9005156a66947fd07c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f000fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b64418769a4c6b36fc93623d7b4187c09c8ebde822957a37a869d6c6286e2ff,PodSandboxId:52b1d6b035012634948c7c691361c6b9dfb4310dca5377b1131f9b8c4154f8f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699401611900527441,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e59559b1117bcb3a35bdb8c023
0031,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd98a636db4d95eeca6ce3a068699916169030db5fa6be733832b8f96a0d45e,PodSandboxId:79b7d62661620af23b510c2fd89d2f3efc39c7486370d93577600db862936d4d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699401611922653279,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b62ce9a74062c222190d8e23b7a456,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 87461a69,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8568358cb5fe9d3eff8b570ef6f715dab6866f8417fc09623e1d038335943540,PodSandboxId:7443dc923609d058dc48bf9e065ca9370de1658a1db9dccc3498d60c4850ce01,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699401611856424144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564c49d2b05d84df711e346
c281f90c,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cdce1363460bbac4ae3acf4b7b4a124217ba0a6d982f8c3634e46d5c12511fb,PodSandboxId:c6340d15937badcaf035767a7703d93f90689c6253f27bab7d3e446cc771302b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699401605801485436,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cfpsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 588e1aff-542f-4465-b4fc-d6184da50a28,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 231d1420,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be15dbb404987f35e6e02b6e7dedf71d91e3a87054cb2c6ee8fafab396254127,PodSandboxId:0dfa73a60a494bdbf64ccd4fc9bbb2eed546e6a0b700ee565eefd139fe43f0b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1699401594969298090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-k9sl9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3362d1b2-8097-4aed-bbd6-a93177532c85,},Annotations:map[string]string{io.kubernetes.container
.hash: 1e7f170f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f25b64573b4714ea0285074dfe03ee7adee9595925e50374d5d3cb6444e13542,PodSandboxId:52b1d6b035012634948c7c691361c6b9dfb4310dca5377b1131f9b8c4154f8f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_EXITED,CreatedAt:1699401594587624859,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e59559b1117bcb3a35bdb8c0230031,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:442d5c77c1fdd41bc4e331405b997f9a73b3511a75a6217fdcd5cf6d27390f17,PodSandboxId:b14b72522bcc8e6e53f529c44aa66a1e032f0082d785e4b55b047f57bd307c4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1699401590422299064,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-036330,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 44ff3041dafe4d9005156a66947fd07c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f000fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:413b862783c5eb27cc45bd92f7318135d1d4ac9a36780ce42690fef1ae56f1a0,PodSandboxId:70a91342384b60c8444eb9ad6acf5d9417aad6c311f161f2f31680717e55ee1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1699401590685742127,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 7564c49d2b05d84df711e346c281f90c,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e083a77f12e1c1cb6b26075c1d64703a6ec41c5b706ba5b9f0f7018e2ff1d65a,PodSandboxId:8e3f779c871c2cb19806fdf8f4a17b424c1f9f0b43a543fb7745c4e9a93d73de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1699401589756716067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b62ce9a74062c222190d8e23b7a456,},Annotations:
map[string]string{io.kubernetes.container.hash: 87461a69,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21950f01f928951543991719c8cb206b68a47d655ee05b312ce93a8fc98df95f,PodSandboxId:e972dd03204c7d123371d66e069a6435dd251c55e9ad824d0eb84975d6c55215,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1699401541569663154,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cfpsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 588e1aff-542f-4465-b4fc-d6184da50a28,},Annotations:map[string]string{io.kubernetes.container.hash: 231d1420,io.kubernetes
.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bac6c75b-c830-4fe9-89ee-d3c3b6f1fe60 name=/runtime.v1.RuntimeService/ListContainers
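
The ListContainers request/response pairs above are routine CRI polling against cri-o. The same container list can be pulled by hand from inside the guest; a minimal sketch, assuming the cri-o socket path recorded in the node's cri-socket annotation below:

  $ minikube -p pause-036330 ssh
  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a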
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f40d5f8ce73a4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   18 seconds ago       Running             coredns                   2                   0dfa73a60a494       coredns-5dd5756b68-k9sl9
	5fd98a636db4d       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   24 seconds ago       Running             kube-apiserver            2                   79b7d62661620       kube-apiserver-pause-036330
	6b64418769a4c       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   24 seconds ago       Running             kube-scheduler            2                   52b1d6b035012       kube-scheduler-pause-036330
	c61e946f9e193       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   24 seconds ago       Running             etcd                      2                   e2516d488721e       etcd-pause-036330
	8568358cb5fe9       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   25 seconds ago       Running             kube-controller-manager   2                   7443dc923609d       kube-controller-manager-pause-036330
	0cdce1363460b       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   31 seconds ago       Running             kube-proxy                1                   c6340d15937ba       kube-proxy-cfpsq
	be15dbb404987       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   41 seconds ago       Exited              coredns                   1                   0dfa73a60a494       coredns-5dd5756b68-k9sl9
	f25b64573b471       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   42 seconds ago       Exited              kube-scheduler            1                   52b1d6b035012       kube-scheduler-pause-036330
	413b862783c5e       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   46 seconds ago       Exited              kube-controller-manager   1                   70a91342384b6       kube-controller-manager-pause-036330
	442d5c77c1fdd       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   46 seconds ago       Exited              etcd                      1                   b14b72522bcc8       etcd-pause-036330
	e083a77f12e1c       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   47 seconds ago       Exited              kube-apiserver            1                   8e3f779c871c2       kube-apiserver-pause-036330
	21950f01f9289       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   About a minute ago   Exited              kube-proxy                0                   e972dd03204c7       kube-proxy-cfpsq
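
Each row above is one CRI container record. To inspect a single entry in full, crictl can filter by name or take a (possibly truncated) container ID; a sketch, assuming it is run inside the guest:

  $ sudo crictl ps -a --name kube-apiserver
  $ sudo crictl inspect 5fd98a636db4d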
	
	* 
	* ==> coredns [be15dbb404987f35e6e02b6e7dedf71d91e3a87054cb2c6ee8fafab396254127] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51392 - 13756 "HINFO IN 8951741127895334174.4278495589966544284. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013559422s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
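
The SIGTERM/lameduck lines mark the shutdown of this first coredns attempt, and the connection-refused warning is expected while the apiserver is restarting. Output of an exited attempt can be recovered with --previous; a sketch, assuming the kubectl context matches the profile name:

  $ kubectl --context pause-036330 -n kube-system logs coredns-5dd5756b68-k9sl9 --previous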
	
	* 
	* ==> coredns [f40d5f8ce73a44634f651f67452a614f1f170a4604127bad9b015e24877c8c3b] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41081 - 64728 "HINFO IN 235926611506889076.1863088964529852031. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.024389615s
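
A clean HINFO self-check like the one above suggests the restarted coredns is serving. An end-to-end probe from a throwaway pod would look like the following sketch (busybox:1.36 is an arbitrary image choice, not one used by this test):

  $ kubectl --context pause-036330 run dns-probe --rm -it --restart=Never \
      --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local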
	
	* 
	* ==> describe nodes <==
	* Name:               pause-036330
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-036330
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e
	                    minikube.k8s.io/name=pause-036330
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_07T23_58_45_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 Nov 2023 23:58:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-036330
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Nov 2023 00:00:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Nov 2023 00:00:17 +0000   Tue, 07 Nov 2023 23:58:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Nov 2023 00:00:17 +0000   Tue, 07 Nov 2023 23:58:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Nov 2023 00:00:17 +0000   Tue, 07 Nov 2023 23:58:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Nov 2023 00:00:17 +0000   Tue, 07 Nov 2023 23:58:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.61
	  Hostname:    pause-036330
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 0e341b7e4a2f4a6da1ec09eaea64e612
	  System UUID:                0e341b7e-4a2f-4a6d-a1ec-09eaea64e612
	  Boot ID:                    62bd37ee-47bd-406b-8028-c687da144514
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-k9sl9                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     98s
	  kube-system                 etcd-pause-036330                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         112s
	  kube-system                 kube-apiserver-pause-036330             250m (12%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-pause-036330    200m (10%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-cfpsq                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-scheduler-pause-036330             100m (5%)     0 (0%)      0 (0%)           0 (0%)         112s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 94s                kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  NodeAllocatableEnforced  112s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  112s               kubelet          Node pause-036330 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s               kubelet          Node pause-036330 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s               kubelet          Node pause-036330 status is now: NodeHasSufficientPID
	  Normal  NodeReady                112s               kubelet          Node pause-036330 status is now: NodeReady
	  Normal  Starting                 112s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           101s               node-controller  Node pause-036330 event: Registered Node pause-036330 in Controller
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)  kubelet          Node pause-036330 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)  kubelet          Node pause-036330 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 26s)  kubelet          Node pause-036330 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7s                 node-controller  Node pause-036330 event: Registered Node pause-036330 in Controller
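
The dump above is standard kubectl describe node output as captured by minikube logs; note the two Starting/RegisteredNode event pairs, one per kubelet restart. To reproduce it directly:

  $ kubectl --context pause-036330 describe node pause-036330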
	
	* 
	* ==> dmesg <==
	* [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.065075] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.409114] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.856915] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.149880] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.068873] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.249825] systemd-fstab-generator[644]: Ignoring "noauto" for root device
	[  +0.120866] systemd-fstab-generator[655]: Ignoring "noauto" for root device
	[  +0.163839] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.120224] systemd-fstab-generator[679]: Ignoring "noauto" for root device
	[  +0.260374] systemd-fstab-generator[703]: Ignoring "noauto" for root device
	[  +9.051624] systemd-fstab-generator[925]: Ignoring "noauto" for root device
	[  +8.751755] systemd-fstab-generator[1259]: Ignoring "noauto" for root device
	[Nov 7 23:59] kauditd_printk_skb: 21 callbacks suppressed
	[  +9.382936] systemd-fstab-generator[2192]: Ignoring "noauto" for root device
	[  +0.346585] systemd-fstab-generator[2221]: Ignoring "noauto" for root device
	[  +0.310157] systemd-fstab-generator[2252]: Ignoring "noauto" for root device
	[  +0.242672] systemd-fstab-generator[2271]: Ignoring "noauto" for root device
	[  +0.594333] systemd-fstab-generator[2349]: Ignoring "noauto" for root device
	[Nov 8 00:00] systemd-fstab-generator[3181]: Ignoring "noauto" for root device
	[  +6.816002] kauditd_printk_skb: 8 callbacks suppressed
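
The ring-buffer excerpt above is ordinary guest boot noise (fstab generators, NFSD recovery warnings); the bursts of systemd-fstab-generator lines line up with the runtime restarts. It can be re-read from the guest; a sketch:

  $ minikube -p pause-036330 ssh -- sudo dmesg | tail -n 40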
	
	* 
	* ==> etcd [442d5c77c1fdd41bc4e331405b997f9a73b3511a75a6217fdcd5cf6d27390f17] <==
	* 
	* 
	* ==> etcd [c61e946f9e193f32ef80c8f891749abb38ff8eb408686ec5720d342cb8e43ff2] <==
	* {"level":"info","ts":"2023-11-08T00:00:14.294373Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.61:2380"}
	{"level":"info","ts":"2023-11-08T00:00:14.294443Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.61:2380"}
	{"level":"info","ts":"2023-11-08T00:00:14.294489Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"be6e2cf5fb13c","initial-advertise-peer-urls":["https://192.168.39.61:2380"],"listen-peer-urls":["https://192.168.39.61:2380"],"advertise-client-urls":["https://192.168.39.61:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.61:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-08T00:00:14.294965Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-08T00:00:15.839727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c is starting a new election at term 2"}
	{"level":"info","ts":"2023-11-08T00:00:15.839795Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c became pre-candidate at term 2"}
	{"level":"info","ts":"2023-11-08T00:00:15.839846Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c received MsgPreVoteResp from be6e2cf5fb13c at term 2"}
	{"level":"info","ts":"2023-11-08T00:00:15.83986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c became candidate at term 3"}
	{"level":"info","ts":"2023-11-08T00:00:15.839866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c received MsgVoteResp from be6e2cf5fb13c at term 3"}
	{"level":"info","ts":"2023-11-08T00:00:15.839874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c became leader at term 3"}
	{"level":"info","ts":"2023-11-08T00:00:15.839881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: be6e2cf5fb13c elected leader be6e2cf5fb13c at term 3"}
	{"level":"info","ts":"2023-11-08T00:00:15.845245Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"be6e2cf5fb13c","local-member-attributes":"{Name:pause-036330 ClientURLs:[https://192.168.39.61:2379]}","request-path":"/0/members/be6e2cf5fb13c/attributes","cluster-id":"855213fb0218a9ad","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-08T00:00:15.845445Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T00:00:15.845464Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T00:00:15.846779Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-08T00:00:15.847219Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-08T00:00:15.847277Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-08T00:00:15.846813Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.61:2379"}
	{"level":"info","ts":"2023-11-08T00:00:30.068709Z","caller":"traceutil/trace.go:171","msg":"trace[794105695] transaction","detail":"{read_only:false; response_revision:447; number_of_response:1; }","duration":"136.364047ms","start":"2023-11-08T00:00:29.932319Z","end":"2023-11-08T00:00:30.068683Z","steps":["trace[794105695] 'process raft request'  (duration: 135.644202ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-08T00:00:30.306912Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"226.938617ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-controller\" ","response":"range_response_count:1 size:201"}
	{"level":"info","ts":"2023-11-08T00:00:30.307048Z","caller":"traceutil/trace.go:171","msg":"trace[849905167] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-controller; range_end:; response_count:1; response_revision:447; }","duration":"227.109815ms","start":"2023-11-08T00:00:30.079915Z","end":"2023-11-08T00:00:30.307025Z","steps":["trace[849905167] 'range keys from in-memory index tree'  (duration: 226.834735ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-08T00:00:30.307853Z","caller":"traceutil/trace.go:171","msg":"trace[871415036] linearizableReadLoop","detail":"{readStateIndex:488; appliedIndex:487; }","duration":"118.507917ms","start":"2023-11-08T00:00:30.189321Z","end":"2023-11-08T00:00:30.307829Z","steps":["trace[871415036] 'read index received'  (duration: 118.337561ms)","trace[871415036] 'applied index is now lower than readState.Index'  (duration: 169.798µs)"],"step_count":2}
	{"level":"warn","ts":"2023-11-08T00:00:30.308016Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.703926ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-036330\" ","response":"range_response_count:1 size:6629"}
	{"level":"info","ts":"2023-11-08T00:00:30.308081Z","caller":"traceutil/trace.go:171","msg":"trace[2028021459] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-036330; range_end:; response_count:1; response_revision:448; }","duration":"118.770911ms","start":"2023-11-08T00:00:30.189294Z","end":"2023-11-08T00:00:30.308065Z","steps":["trace[2028021459] 'agreement among raft nodes before linearized reading'  (duration: 118.629737ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-08T00:00:30.30841Z","caller":"traceutil/trace.go:171","msg":"trace[1027259908] transaction","detail":"{read_only:false; response_revision:448; number_of_response:1; }","duration":"227.575801ms","start":"2023-11-08T00:00:30.080823Z","end":"2023-11-08T00:00:30.308399Z","steps":["trace[1027259908] 'process raft request'  (duration: 226.884454ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  00:00:37 up 2 min,  0 users,  load average: 1.62, 0.76, 0.29
	Linux pause-036330 5.10.57 #1 SMP Tue Nov 7 06:51:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [5fd98a636db4d95eeca6ce3a068699916169030db5fa6be733832b8f96a0d45e] <==
	* I1108 00:00:17.238078       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1108 00:00:17.238104       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1108 00:00:17.238258       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1108 00:00:17.403228       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 00:00:17.427131       1 shared_informer.go:318] Caches are synced for configmaps
	I1108 00:00:17.429711       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 00:00:17.435954       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1108 00:00:17.436007       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1108 00:00:17.456765       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1108 00:00:17.456878       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1108 00:00:17.456931       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1108 00:00:17.460105       1 aggregator.go:166] initial CRD sync complete...
	I1108 00:00:17.460159       1 autoregister_controller.go:141] Starting autoregister controller
	I1108 00:00:17.460166       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 00:00:17.460176       1 cache.go:39] Caches are synced for autoregister controller
	I1108 00:00:17.467014       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1108 00:00:17.486287       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 00:00:18.250494       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 00:00:19.015469       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1108 00:00:19.027935       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1108 00:00:19.080385       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1108 00:00:19.112720       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 00:00:19.122102       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 00:00:30.598155       1 controller.go:624] quota admission added evaluator for: endpoints
	I1108 00:00:30.624223       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
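
The cache-sync and quota-evaluator lines indicate the restarted apiserver completed initialization by 00:00:19. Its aggregate readiness can be checked through the readyz endpoint; a sketch:

  $ kubectl --context pause-036330 get --raw '/readyz?verbose'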
	
	* 
	* ==> kube-apiserver [e083a77f12e1c1cb6b26075c1d64703a6ec41c5b706ba5b9f0f7018e2ff1d65a] <==
	* 
	* 
	* ==> kube-controller-manager [413b862783c5eb27cc45bd92f7318135d1d4ac9a36780ce42690fef1ae56f1a0] <==
	* 
	* 
	* ==> kube-controller-manager [8568358cb5fe9d3eff8b570ef6f715dab6866f8417fc09623e1d038335943540] <==
	* I1108 00:00:30.450761       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1108 00:00:30.450844       1 taint_manager.go:211] "Sending events to api server"
	I1108 00:00:30.450777       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1108 00:00:30.451240       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1108 00:00:30.451284       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1108 00:00:30.451444       1 event.go:307] "Event occurred" object="pause-036330" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-036330 event: Registered Node pause-036330 in Controller"
	I1108 00:00:30.454045       1 shared_informer.go:318] Caches are synced for crt configmap
	I1108 00:00:30.462459       1 shared_informer.go:318] Caches are synced for daemon sets
	I1108 00:00:30.464868       1 shared_informer.go:318] Caches are synced for deployment
	I1108 00:00:30.471414       1 shared_informer.go:318] Caches are synced for namespace
	I1108 00:00:30.473071       1 shared_informer.go:318] Caches are synced for PV protection
	I1108 00:00:30.477005       1 shared_informer.go:318] Caches are synced for HPA
	I1108 00:00:30.479327       1 shared_informer.go:318] Caches are synced for GC
	I1108 00:00:30.479919       1 shared_informer.go:318] Caches are synced for stateful set
	I1108 00:00:30.503221       1 shared_informer.go:318] Caches are synced for job
	I1108 00:00:30.509224       1 shared_informer.go:318] Caches are synced for cronjob
	I1108 00:00:30.516190       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1108 00:00:30.535793       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1108 00:00:30.551400       1 shared_informer.go:318] Caches are synced for resource quota
	I1108 00:00:30.557799       1 shared_informer.go:318] Caches are synced for endpoint
	I1108 00:00:30.584823       1 shared_informer.go:318] Caches are synced for resource quota
	I1108 00:00:30.597671       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1108 00:00:30.981146       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 00:00:31.030893       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 00:00:31.030966       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
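
All informer caches synced at 00:00:30, matching the RegisteredNode event recorded on the node above. Which controller-manager instance holds leadership is tracked in a kube-system Lease; a sketch:

  $ kubectl --context pause-036330 -n kube-system get lease kube-controller-manager -o yaml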
	
	* 
	* ==> kube-proxy [0cdce1363460bbac4ae3acf4b7b4a124217ba0a6d982f8c3634e46d5c12511fb] <==
	* I1108 00:00:06.035688       1 server_others.go:69] "Using iptables proxy"
	E1108 00:00:06.040230       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-036330": dial tcp 192.168.39.61:8443: connect: connection refused
	E1108 00:00:07.151431       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-036330": dial tcp 192.168.39.61:8443: connect: connection refused
	E1108 00:00:09.299823       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-036330": dial tcp 192.168.39.61:8443: connect: connection refused
	I1108 00:00:17.446740       1 node.go:141] Successfully retrieved node IP: 192.168.39.61
	I1108 00:00:17.532884       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1108 00:00:17.532998       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1108 00:00:17.536146       1 server_others.go:152] "Using iptables Proxier"
	I1108 00:00:17.536242       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1108 00:00:17.536409       1 server.go:846] "Version info" version="v1.28.3"
	I1108 00:00:17.536417       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 00:00:17.537675       1 config.go:188] "Starting service config controller"
	I1108 00:00:17.537724       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1108 00:00:17.537750       1 config.go:97] "Starting endpoint slice config controller"
	I1108 00:00:17.537753       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1108 00:00:17.538215       1 config.go:315] "Starting node config controller"
	I1108 00:00:17.538221       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1108 00:00:17.638983       1 shared_informer.go:318] Caches are synced for node config
	I1108 00:00:17.639035       1 shared_informer.go:318] Caches are synced for service config
	I1108 00:00:17.639057       1 shared_informer.go:318] Caches are synced for endpoint slice config
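
After three connection-refused retries, this kube-proxy reached the apiserver at 00:00:17 and programmed service routing with the iptables proxier. The resulting NAT rules can be listed from the guest; a sketch:

  $ minikube -p pause-036330 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | head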
	
	* 
	* ==> kube-proxy [21950f01f928951543991719c8cb206b68a47d655ee05b312ce93a8fc98df95f] <==
	* I1107 23:59:02.003253       1 server_others.go:69] "Using iptables proxy"
	I1107 23:59:02.026680       1 node.go:141] Successfully retrieved node IP: 192.168.39.61
	I1107 23:59:02.098666       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1107 23:59:02.098736       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1107 23:59:02.101414       1 server_others.go:152] "Using iptables Proxier"
	I1107 23:59:02.101869       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1107 23:59:02.102072       1 server.go:846] "Version info" version="v1.28.3"
	I1107 23:59:02.102294       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 23:59:02.104057       1 config.go:188] "Starting service config controller"
	I1107 23:59:02.104166       1 config.go:97] "Starting endpoint slice config controller"
	I1107 23:59:02.104448       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1107 23:59:02.104633       1 config.go:315] "Starting node config controller"
	I1107 23:59:02.104665       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1107 23:59:02.104854       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1107 23:59:02.205816       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1107 23:59:02.206036       1 shared_informer.go:318] Caches are synced for service config
	I1107 23:59:02.206386       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [6b64418769a4c6b36fc93623d7b4187c09c8ebde822957a37a869d6c6286e2ff] <==
	* I1108 00:00:14.091966       1 serving.go:348] Generated self-signed cert in-memory
	W1108 00:00:17.309380       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1108 00:00:17.309623       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 00:00:17.309636       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1108 00:00:17.309643       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1108 00:00:17.400277       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1108 00:00:17.402116       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 00:00:17.405668       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 00:00:17.405739       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1108 00:00:17.409665       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1108 00:00:17.409751       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1108 00:00:17.506828       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
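
The requestheader/authentication warnings above are a benign startup race; the scheduler still comes up and serves securely on 127.0.0.1:10259 with the self-signed in-memory cert from the first line (hence -k below). A local health probe; a sketch:

  $ minikube -p pause-036330 ssh -- curl -sk https://127.0.0.1:10259/healthz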
	
	* 
	* ==> kube-scheduler [f25b64573b4714ea0285074dfe03ee7adee9595925e50374d5d3cb6444e13542] <==
	* E1108 00:00:04.197975       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.61:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	W1108 00:00:04.353883       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.61:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	E1108 00:00:04.353968       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.61:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	W1108 00:00:04.722909       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.39.61:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	E1108 00:00:04.723003       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.61:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	W1108 00:00:04.726825       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.61:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	E1108 00:00:04.726906       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.61:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	W1108 00:00:04.991904       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.39.61:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	E1108 00:00:04.991991       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.61:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	W1108 00:00:05.187912       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.39.61:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	E1108 00:00:05.188008       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.61:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	W1108 00:00:05.254245       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.39.61:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	E1108 00:00:05.254310       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.61:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	W1108 00:00:06.153851       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.39.61:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	E1108 00:00:06.153999       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.61:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	W1108 00:00:06.236231       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.61:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	E1108 00:00:06.236373       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.61:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	W1108 00:00:06.342359       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.39.61:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	E1108 00:00:06.342449       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.61:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	W1108 00:00:06.450509       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.39.61:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	E1108 00:00:06.450710       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.61:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	W1108 00:00:06.805409       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.39.61:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	E1108 00:00:06.805497       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.61:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	E1108 00:00:09.743921       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E1108 00:00:09.744843       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-07 23:58:09 UTC, ends at Wed 2023-11-08 00:00:37 UTC. --
	Nov 08 00:00:11 pause-036330 kubelet[3187]: E1108 00:00:11.900671    3187 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.61:8443: connect: connection refused" node="pause-036330"
	Nov 08 00:00:12 pause-036330 kubelet[3187]: W1108 00:00:12.064476    3187 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	Nov 08 00:00:12 pause-036330 kubelet[3187]: E1108 00:00:12.064624    3187 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	Nov 08 00:00:12 pause-036330 kubelet[3187]: W1108 00:00:12.231119    3187 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	Nov 08 00:00:12 pause-036330 kubelet[3187]: E1108 00:00:12.231223    3187 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	Nov 08 00:00:12 pause-036330 kubelet[3187]: W1108 00:00:12.258416    3187 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-036330&limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	Nov 08 00:00:12 pause-036330 kubelet[3187]: E1108 00:00:12.258530    3187 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-036330&limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	Nov 08 00:00:12 pause-036330 kubelet[3187]: W1108 00:00:12.522831    3187 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	Nov 08 00:00:12 pause-036330 kubelet[3187]: E1108 00:00:12.522918    3187 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	Nov 08 00:00:12 pause-036330 kubelet[3187]: E1108 00:00:12.594132    3187 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-036330?timeout=10s\": dial tcp 192.168.39.61:8443: connect: connection refused" interval="1.6s"
	Nov 08 00:00:12 pause-036330 kubelet[3187]: I1108 00:00:12.702372    3187 kubelet_node_status.go:70] "Attempting to register node" node="pause-036330"
	Nov 08 00:00:12 pause-036330 kubelet[3187]: E1108 00:00:12.702842    3187 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.61:8443: connect: connection refused" node="pause-036330"
	Nov 08 00:00:14 pause-036330 kubelet[3187]: I1108 00:00:14.305078    3187 kubelet_node_status.go:70] "Attempting to register node" node="pause-036330"
	Nov 08 00:00:17 pause-036330 kubelet[3187]: I1108 00:00:17.485201    3187 kubelet_node_status.go:108] "Node was previously registered" node="pause-036330"
	Nov 08 00:00:17 pause-036330 kubelet[3187]: I1108 00:00:17.485280    3187 kubelet_node_status.go:73] "Successfully registered node" node="pause-036330"
	Nov 08 00:00:17 pause-036330 kubelet[3187]: I1108 00:00:17.487196    3187 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 08 00:00:17 pause-036330 kubelet[3187]: I1108 00:00:17.488357    3187 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 08 00:00:18 pause-036330 kubelet[3187]: I1108 00:00:18.171998    3187 apiserver.go:52] "Watching apiserver"
	Nov 08 00:00:18 pause-036330 kubelet[3187]: I1108 00:00:18.176033    3187 topology_manager.go:215] "Topology Admit Handler" podUID="588e1aff-542f-4465-b4fc-d6184da50a28" podNamespace="kube-system" podName="kube-proxy-cfpsq"
	Nov 08 00:00:18 pause-036330 kubelet[3187]: I1108 00:00:18.176193    3187 topology_manager.go:215] "Topology Admit Handler" podUID="3362d1b2-8097-4aed-bbd6-a93177532c85" podNamespace="kube-system" podName="coredns-5dd5756b68-k9sl9"
	Nov 08 00:00:18 pause-036330 kubelet[3187]: I1108 00:00:18.189983    3187 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Nov 08 00:00:18 pause-036330 kubelet[3187]: I1108 00:00:18.257401    3187 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/588e1aff-542f-4465-b4fc-d6184da50a28-xtables-lock\") pod \"kube-proxy-cfpsq\" (UID: \"588e1aff-542f-4465-b4fc-d6184da50a28\") " pod="kube-system/kube-proxy-cfpsq"
	Nov 08 00:00:18 pause-036330 kubelet[3187]: I1108 00:00:18.257497    3187 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/588e1aff-542f-4465-b4fc-d6184da50a28-lib-modules\") pod \"kube-proxy-cfpsq\" (UID: \"588e1aff-542f-4465-b4fc-d6184da50a28\") " pod="kube-system/kube-proxy-cfpsq"
	Nov 08 00:00:18 pause-036330 kubelet[3187]: I1108 00:00:18.477651    3187 scope.go:117] "RemoveContainer" containerID="be15dbb404987f35e6e02b6e7dedf71d91e3a87054cb2c6ee8fafab396254127"
	Nov 08 00:00:26 pause-036330 kubelet[3187]: I1108 00:00:26.534104    3187 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-036330 -n pause-036330
helpers_test.go:261: (dbg) Run:  kubectl --context pause-036330 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
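Note: helpers_test.go:261 above checks for pods stuck outside the Running phase using a kubectl field selector. A minimal client-go sketch that performs the same query directly is shown below; the kubeconfig location and the printed fields are illustrative assumptions, not the harness's actual code.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the default kubeconfig (~/.kube/config); assumed location.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same filter as the post-mortem check: every pod whose phase is not
		// Running, across all namespaces.
		pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}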
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-036330 -n pause-036330
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-036330 logs -n 25
E1108 00:00:38.957154   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-036330 logs -n 25: (1.452554924s)
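Note: the log-collection step above is a plain shell-out to the minikube binary under test. A minimal sketch of that step with the same arguments follows; the helper below is an illustration, not the actual helpers_test.go implementation.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Capture the last 25 log lines from the pause-036330 profile, mirroring
		// "out/minikube-linux-amd64 -p pause-036330 logs -n 25" recorded above.
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", "pause-036330", "logs", "-n", "25").CombinedOutput()
		if err != nil {
			fmt.Printf("logs command failed: %v\n", err)
		}
		fmt.Print(string(out))
	}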
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p scheduled-stop-153425       | scheduled-stop-153425    | jenkins | v1.32.0 | 07 Nov 23 23:56 UTC |                     |
	|         | --schedule 5m                  |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-153425       | scheduled-stop-153425    | jenkins | v1.32.0 | 07 Nov 23 23:56 UTC |                     |
	|         | --schedule 5m                  |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-153425       | scheduled-stop-153425    | jenkins | v1.32.0 | 07 Nov 23 23:56 UTC |                     |
	|         | --schedule 5m                  |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-153425       | scheduled-stop-153425    | jenkins | v1.32.0 | 07 Nov 23 23:56 UTC |                     |
	|         | --schedule 15s                 |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-153425       | scheduled-stop-153425    | jenkins | v1.32.0 | 07 Nov 23 23:56 UTC |                     |
	|         | --schedule 15s                 |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-153425       | scheduled-stop-153425    | jenkins | v1.32.0 | 07 Nov 23 23:56 UTC |                     |
	|         | --schedule 15s                 |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-153425       | scheduled-stop-153425    | jenkins | v1.32.0 | 07 Nov 23 23:56 UTC | 07 Nov 23 23:56 UTC |
	|         | --cancel-scheduled             |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-153425       | scheduled-stop-153425    | jenkins | v1.32.0 | 07 Nov 23 23:57 UTC |                     |
	|         | --schedule 15s                 |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-153425       | scheduled-stop-153425    | jenkins | v1.32.0 | 07 Nov 23 23:57 UTC |                     |
	|         | --schedule 15s                 |                          |         |         |                     |                     |
	| stop    | -p scheduled-stop-153425       | scheduled-stop-153425    | jenkins | v1.32.0 | 07 Nov 23 23:57 UTC | 07 Nov 23 23:57 UTC |
	|         | --schedule 15s                 |                          |         |         |                     |                     |
	| delete  | -p scheduled-stop-153425       | scheduled-stop-153425    | jenkins | v1.32.0 | 07 Nov 23 23:57 UTC | 07 Nov 23 23:57 UTC |
	| start   | -p NoKubernetes-798084         | NoKubernetes-798084      | jenkins | v1.32.0 | 07 Nov 23 23:57 UTC |                     |
	|         | --no-kubernetes                |                          |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                          |         |         |                     |                     |
	|         | --driver=kvm2                  |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| start   | -p pause-036330 --memory=2048  | pause-036330             | jenkins | v1.32.0 | 07 Nov 23 23:57 UTC | 07 Nov 23 23:59 UTC |
	|         | --install-addons=false         |                          |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| start   | -p offline-crio-711737         | offline-crio-711737      | jenkins | v1.32.0 | 07 Nov 23 23:57 UTC | 08 Nov 23 00:00 UTC |
	|         | --alsologtostderr              |                          |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                          |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| start   | -p NoKubernetes-798084         | NoKubernetes-798084      | jenkins | v1.32.0 | 07 Nov 23 23:57 UTC | 07 Nov 23 23:59 UTC |
	|         | --driver=kvm2                  |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| start   | -p pause-036330                | pause-036330             | jenkins | v1.32.0 | 07 Nov 23 23:59 UTC | 08 Nov 23 00:00 UTC |
	|         | --alsologtostderr              |                          |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| start   | -p NoKubernetes-798084         | NoKubernetes-798084      | jenkins | v1.32.0 | 07 Nov 23 23:59 UTC | 07 Nov 23 23:59 UTC |
	|         | --no-kubernetes --driver=kvm2  |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| delete  | -p NoKubernetes-798084         | NoKubernetes-798084      | jenkins | v1.32.0 | 07 Nov 23 23:59 UTC | 07 Nov 23 23:59 UTC |
	| start   | -p NoKubernetes-798084         | NoKubernetes-798084      | jenkins | v1.32.0 | 07 Nov 23 23:59 UTC | 08 Nov 23 00:00 UTC |
	|         | --no-kubernetes --driver=kvm2  |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| delete  | -p offline-crio-711737         | offline-crio-711737      | jenkins | v1.32.0 | 08 Nov 23 00:00 UTC | 08 Nov 23 00:00 UTC |
	| start   | -p force-systemd-env-420594    | force-systemd-env-420594 | jenkins | v1.32.0 | 08 Nov 23 00:00 UTC |                     |
	|         | --memory=2048                  |                          |         |         |                     |                     |
	|         | --alsologtostderr              |                          |         |         |                     |                     |
	|         | -v=5 --driver=kvm2             |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| start   | -p running-upgrade-802871      | running-upgrade-802871   | jenkins | v1.32.0 | 08 Nov 23 00:00 UTC |                     |
	|         | --memory=2200                  |                          |         |         |                     |                     |
	|         | --alsologtostderr              |                          |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| ssh     | -p NoKubernetes-798084 sudo    | NoKubernetes-798084      | jenkins | v1.32.0 | 08 Nov 23 00:00 UTC |                     |
	|         | systemctl is-active --quiet    |                          |         |         |                     |                     |
	|         | service kubelet                |                          |         |         |                     |                     |
	| stop    | -p NoKubernetes-798084         | NoKubernetes-798084      | jenkins | v1.32.0 | 08 Nov 23 00:00 UTC | 08 Nov 23 00:00 UTC |
	| start   | -p NoKubernetes-798084         | NoKubernetes-798084      | jenkins | v1.32.0 | 08 Nov 23 00:00 UTC |                     |
	|         | --driver=kvm2                  |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	|---------|--------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/08 00:00:26
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 00:00:26.160288   42523 out.go:296] Setting OutFile to fd 1 ...
	I1108 00:00:26.160589   42523 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:00:26.160594   42523 out.go:309] Setting ErrFile to fd 2...
	I1108 00:00:26.160600   42523 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:00:26.160920   42523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
	I1108 00:00:26.161588   42523 out.go:303] Setting JSON to false
	I1108 00:00:26.162828   42523 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6175,"bootTime":1699395451,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 00:00:26.162897   42523 start.go:138] virtualization: kvm guest
	I1108 00:00:26.166008   42523 out.go:177] * [NoKubernetes-798084] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1108 00:00:26.168134   42523 notify.go:220] Checking for updates...
	I1108 00:00:26.168154   42523 out.go:177]   - MINIKUBE_LOCATION=17585
	I1108 00:00:26.170154   42523 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 00:00:26.172142   42523 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:00:26.174073   42523 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	I1108 00:00:26.175653   42523 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 00:00:26.177201   42523 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 00:00:26.179630   42523 config.go:182] Loaded profile config "NoKubernetes-798084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1108 00:00:26.180256   42523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:00:26.180310   42523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:00:26.195570   42523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46219
	I1108 00:00:26.196004   42523 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:00:26.196536   42523 main.go:141] libmachine: Using API Version  1
	I1108 00:00:26.196554   42523 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:00:26.197066   42523 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:00:26.197255   42523 main.go:141] libmachine: (NoKubernetes-798084) Calling .DriverName
	I1108 00:00:26.197490   42523 start.go:1772] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I1108 00:00:26.197507   42523 driver.go:378] Setting default libvirt URI to qemu:///system
	I1108 00:00:26.197878   42523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:00:26.197916   42523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:00:26.212942   42523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35607
	I1108 00:00:26.213397   42523 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:00:26.213949   42523 main.go:141] libmachine: Using API Version  1
	I1108 00:00:26.213974   42523 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:00:26.214338   42523 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:00:26.214541   42523 main.go:141] libmachine: (NoKubernetes-798084) Calling .DriverName
	I1108 00:00:26.253208   42523 out.go:177] * Using the kvm2 driver based on existing profile
	I1108 00:00:26.255589   42523 start.go:298] selected driver: kvm2
	I1108 00:00:26.255599   42523 start.go:902] validating driver "kvm2" against &{Name:NoKubernetes-798084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-798084 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.249 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:00:26.255741   42523 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 00:00:26.256209   42523 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:00:26.256290   42523 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17585-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1108 00:00:26.271959   42523 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1108 00:00:26.273070   42523 cni.go:84] Creating CNI manager for ""
	I1108 00:00:26.273084   42523 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:00:26.273096   42523 start_flags.go:323] config:
	{Name:NoKubernetes-798084 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-798084 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.249 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:00:26.273292   42523 iso.go:125] acquiring lock: {Name:mk02d02b2a7a45dbdd1b46a32fb0724673cb4d8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:00:26.276033   42523 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-798084
	I1108 00:00:23.170268   41128 pod_ready.go:102] pod "coredns-5dd5756b68-k9sl9" in "kube-system" namespace has status "Ready":"False"
	I1108 00:00:25.171089   41128 pod_ready.go:102] pod "coredns-5dd5756b68-k9sl9" in "kube-system" namespace has status "Ready":"False"
	I1108 00:00:26.670335   41128 pod_ready.go:92] pod "coredns-5dd5756b68-k9sl9" in "kube-system" namespace has status "Ready":"True"
	I1108 00:00:26.670361   41128 pod_ready.go:81] duration metric: took 7.523425952s waiting for pod "coredns-5dd5756b68-k9sl9" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:26.670374   41128 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:26.677889   41128 pod_ready.go:92] pod "etcd-pause-036330" in "kube-system" namespace has status "Ready":"True"
	I1108 00:00:26.677918   41128 pod_ready.go:81] duration metric: took 7.535478ms waiting for pod "etcd-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:26.677932   41128 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:23.970923   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | domain force-systemd-env-420594 has defined MAC address 52:54:00:2a:3f:cf in network mk-force-systemd-env-420594
	I1108 00:00:23.971364   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | unable to find current IP address of domain force-systemd-env-420594 in network mk-force-systemd-env-420594
	I1108 00:00:23.971395   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | I1108 00:00:23.971312   41789 retry.go:31] will retry after 592.949921ms: waiting for machine to come up
	I1108 00:00:24.567677   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | domain force-systemd-env-420594 has defined MAC address 52:54:00:2a:3f:cf in network mk-force-systemd-env-420594
	I1108 00:00:24.568207   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | unable to find current IP address of domain force-systemd-env-420594 in network mk-force-systemd-env-420594
	I1108 00:00:24.568233   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | I1108 00:00:24.568128   41789 retry.go:31] will retry after 594.84646ms: waiting for machine to come up
	I1108 00:00:25.165040   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | domain force-systemd-env-420594 has defined MAC address 52:54:00:2a:3f:cf in network mk-force-systemd-env-420594
	I1108 00:00:25.165650   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | unable to find current IP address of domain force-systemd-env-420594 in network mk-force-systemd-env-420594
	I1108 00:00:25.165678   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | I1108 00:00:25.165557   41789 retry.go:31] will retry after 674.335799ms: waiting for machine to come up
	I1108 00:00:25.841236   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | domain force-systemd-env-420594 has defined MAC address 52:54:00:2a:3f:cf in network mk-force-systemd-env-420594
	I1108 00:00:25.841577   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | unable to find current IP address of domain force-systemd-env-420594 in network mk-force-systemd-env-420594
	I1108 00:00:25.841611   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | I1108 00:00:25.841549   41789 retry.go:31] will retry after 765.193878ms: waiting for machine to come up
	I1108 00:00:26.607964   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | domain force-systemd-env-420594 has defined MAC address 52:54:00:2a:3f:cf in network mk-force-systemd-env-420594
	I1108 00:00:26.608361   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | unable to find current IP address of domain force-systemd-env-420594 in network mk-force-systemd-env-420594
	I1108 00:00:26.608383   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | I1108 00:00:26.608317   41789 retry.go:31] will retry after 947.459789ms: waiting for machine to come up
	I1108 00:00:27.557841   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | domain force-systemd-env-420594 has defined MAC address 52:54:00:2a:3f:cf in network mk-force-systemd-env-420594
	I1108 00:00:27.558347   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | unable to find current IP address of domain force-systemd-env-420594 in network mk-force-systemd-env-420594
	I1108 00:00:27.558379   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | I1108 00:00:27.558297   41789 retry.go:31] will retry after 1.727534466s: waiting for machine to come up
	I1108 00:00:26.277517   42523 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W1108 00:00:26.474419   42523 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1108 00:00:26.474553   42523 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/NoKubernetes-798084/config.json ...
	I1108 00:00:26.474771   42523 start.go:365] acquiring machines lock for NoKubernetes-798084: {Name:mkf032f30be570950285b6e092e75fb29cc3d166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1108 00:00:28.700486   41128 pod_ready.go:102] pod "kube-apiserver-pause-036330" in "kube-system" namespace has status "Ready":"False"
	I1108 00:00:30.321423   41128 pod_ready.go:92] pod "kube-apiserver-pause-036330" in "kube-system" namespace has status "Ready":"True"
	I1108 00:00:30.321447   41128 pod_ready.go:81] duration metric: took 3.643506993s waiting for pod "kube-apiserver-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:30.321457   41128 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:30.333145   41128 pod_ready.go:92] pod "kube-controller-manager-pause-036330" in "kube-system" namespace has status "Ready":"True"
	I1108 00:00:30.333177   41128 pod_ready.go:81] duration metric: took 11.712574ms waiting for pod "kube-controller-manager-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:30.333191   41128 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cfpsq" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:30.348801   41128 pod_ready.go:92] pod "kube-proxy-cfpsq" in "kube-system" namespace has status "Ready":"True"
	I1108 00:00:30.348845   41128 pod_ready.go:81] duration metric: took 15.644603ms waiting for pod "kube-proxy-cfpsq" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:30.348860   41128 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:32.682262   41128 pod_ready.go:92] pod "kube-scheduler-pause-036330" in "kube-system" namespace has status "Ready":"True"
	I1108 00:00:32.682286   41128 pod_ready.go:81] duration metric: took 2.33341915s waiting for pod "kube-scheduler-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:32.682297   41128 pod_ready.go:38] duration metric: took 13.540703313s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:00:32.682325   41128 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 00:00:32.696095   41128 ops.go:34] apiserver oom_adj: -16
	I1108 00:00:32.696117   41128 kubeadm.go:640] restartCluster took 38.167523728s
	I1108 00:00:32.696127   41128 kubeadm.go:406] StartCluster complete in 38.391355453s
	I1108 00:00:32.696149   41128 settings.go:142] acquiring lock: {Name:mk24113e0811d0822c92609e9886aa6fa175d90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:00:32.696234   41128 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:00:32.697133   41128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:00:32.697423   41128 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 00:00:32.697521   41128 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1108 00:00:32.699450   41128 out.go:177] * Enabled addons: 
	I1108 00:00:32.697708   41128 config.go:182] Loaded profile config "pause-036330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:00:32.698036   41128 kapi.go:59] client config for pause-036330: &rest.Config{Host:"https://192.168.39.61:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/pause-036330/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/profiles/pause-036330/client.key", CAFile:"/home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1108 00:00:32.701051   41128 addons.go:502] enable addons completed in 3.529312ms: enabled=[]
	I1108 00:00:32.704781   41128 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-036330" context rescaled to 1 replicas
	I1108 00:00:32.704833   41128 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 00:00:32.706471   41128 out.go:177] * Verifying Kubernetes components...
	I1108 00:00:29.287443   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | domain force-systemd-env-420594 has defined MAC address 52:54:00:2a:3f:cf in network mk-force-systemd-env-420594
	I1108 00:00:29.287887   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | unable to find current IP address of domain force-systemd-env-420594 in network mk-force-systemd-env-420594
	I1108 00:00:29.287908   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | I1108 00:00:29.287860   41789 retry.go:31] will retry after 1.803959238s: waiting for machine to come up
	I1108 00:00:31.093432   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | domain force-systemd-env-420594 has defined MAC address 52:54:00:2a:3f:cf in network mk-force-systemd-env-420594
	I1108 00:00:31.093832   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | unable to find current IP address of domain force-systemd-env-420594 in network mk-force-systemd-env-420594
	I1108 00:00:31.093865   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | I1108 00:00:31.093790   41789 retry.go:31] will retry after 2.3181566s: waiting for machine to come up
	I1108 00:00:33.414142   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | domain force-systemd-env-420594 has defined MAC address 52:54:00:2a:3f:cf in network mk-force-systemd-env-420594
	I1108 00:00:33.414588   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | unable to find current IP address of domain force-systemd-env-420594 in network mk-force-systemd-env-420594
	I1108 00:00:33.414620   41660 main.go:141] libmachine: (force-systemd-env-420594) DBG | I1108 00:00:33.414537   41789 retry.go:31] will retry after 3.40201263s: waiting for machine to come up
	I1108 00:00:32.707892   41128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:00:32.809123   41128 node_ready.go:35] waiting up to 6m0s for node "pause-036330" to be "Ready" ...
	I1108 00:00:32.809178   41128 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1108 00:00:32.814445   41128 node_ready.go:49] node "pause-036330" has status "Ready":"True"
	I1108 00:00:32.814465   41128 node_ready.go:38] duration metric: took 5.30867ms waiting for node "pause-036330" to be "Ready" ...
	I1108 00:00:32.814473   41128 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:00:32.825365   41128 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-k9sl9" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:33.065302   41128 pod_ready.go:92] pod "coredns-5dd5756b68-k9sl9" in "kube-system" namespace has status "Ready":"True"
	I1108 00:00:33.065345   41128 pod_ready.go:81] duration metric: took 239.956744ms waiting for pod "coredns-5dd5756b68-k9sl9" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:33.065360   41128 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:33.468246   41128 pod_ready.go:92] pod "etcd-pause-036330" in "kube-system" namespace has status "Ready":"True"
	I1108 00:00:33.468269   41128 pod_ready.go:81] duration metric: took 402.900635ms waiting for pod "etcd-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:33.468282   41128 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:33.865242   41128 pod_ready.go:92] pod "kube-apiserver-pause-036330" in "kube-system" namespace has status "Ready":"True"
	I1108 00:00:33.865269   41128 pod_ready.go:81] duration metric: took 396.97924ms waiting for pod "kube-apiserver-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:33.865279   41128 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:34.264684   41128 pod_ready.go:92] pod "kube-controller-manager-pause-036330" in "kube-system" namespace has status "Ready":"True"
	I1108 00:00:34.264712   41128 pod_ready.go:81] duration metric: took 399.425727ms waiting for pod "kube-controller-manager-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:34.264731   41128 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cfpsq" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:34.667374   41128 pod_ready.go:92] pod "kube-proxy-cfpsq" in "kube-system" namespace has status "Ready":"True"
	I1108 00:00:34.667402   41128 pod_ready.go:81] duration metric: took 402.661248ms waiting for pod "kube-proxy-cfpsq" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:34.667414   41128 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:35.065198   41128 pod_ready.go:92] pod "kube-scheduler-pause-036330" in "kube-system" namespace has status "Ready":"True"
	I1108 00:00:35.065220   41128 pod_ready.go:81] duration metric: took 397.799123ms waiting for pod "kube-scheduler-pause-036330" in "kube-system" namespace to be "Ready" ...
	I1108 00:00:35.065229   41128 pod_ready.go:38] duration metric: took 2.250746367s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:00:35.065243   41128 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:00:35.065286   41128 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:00:35.082635   41128 api_server.go:72] duration metric: took 2.377771301s to wait for apiserver process to appear ...
	I1108 00:00:35.082656   41128 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:00:35.082671   41128 api_server.go:253] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I1108 00:00:35.088478   41128 api_server.go:279] https://192.168.39.61:8443/healthz returned 200:
	ok
	I1108 00:00:35.089772   41128 api_server.go:141] control plane version: v1.28.3
	I1108 00:00:35.089795   41128 api_server.go:131] duration metric: took 7.132864ms to wait for apiserver health ...
	I1108 00:00:35.089805   41128 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:00:35.267721   41128 system_pods.go:59] 6 kube-system pods found
	I1108 00:00:35.267753   41128 system_pods.go:61] "coredns-5dd5756b68-k9sl9" [3362d1b2-8097-4aed-bbd6-a93177532c85] Running
	I1108 00:00:35.267759   41128 system_pods.go:61] "etcd-pause-036330" [bf0a46a2-4df3-48df-a448-4deae4726e48] Running
	I1108 00:00:35.267763   41128 system_pods.go:61] "kube-apiserver-pause-036330" [0568f13b-c68a-4c4a-8b68-a384fd8006b8] Running
	I1108 00:00:35.267767   41128 system_pods.go:61] "kube-controller-manager-pause-036330" [351b550d-060f-44d7-9b98-98b378ceaac9] Running
	I1108 00:00:35.267771   41128 system_pods.go:61] "kube-proxy-cfpsq" [588e1aff-542f-4465-b4fc-d6184da50a28] Running
	I1108 00:00:35.267774   41128 system_pods.go:61] "kube-scheduler-pause-036330" [42727091-3480-42c6-adf6-8f9fe39f3f3d] Running
	I1108 00:00:35.267780   41128 system_pods.go:74] duration metric: took 177.96941ms to wait for pod list to return data ...
	I1108 00:00:35.267788   41128 default_sa.go:34] waiting for default service account to be created ...
	I1108 00:00:35.465036   41128 default_sa.go:45] found service account: "default"
	I1108 00:00:35.465064   41128 default_sa.go:55] duration metric: took 197.267871ms for default service account to be created ...
	I1108 00:00:35.465072   41128 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 00:00:35.668162   41128 system_pods.go:86] 6 kube-system pods found
	I1108 00:00:35.668189   41128 system_pods.go:89] "coredns-5dd5756b68-k9sl9" [3362d1b2-8097-4aed-bbd6-a93177532c85] Running
	I1108 00:00:35.668197   41128 system_pods.go:89] "etcd-pause-036330" [bf0a46a2-4df3-48df-a448-4deae4726e48] Running
	I1108 00:00:35.668202   41128 system_pods.go:89] "kube-apiserver-pause-036330" [0568f13b-c68a-4c4a-8b68-a384fd8006b8] Running
	I1108 00:00:35.668207   41128 system_pods.go:89] "kube-controller-manager-pause-036330" [351b550d-060f-44d7-9b98-98b378ceaac9] Running
	I1108 00:00:35.668212   41128 system_pods.go:89] "kube-proxy-cfpsq" [588e1aff-542f-4465-b4fc-d6184da50a28] Running
	I1108 00:00:35.668217   41128 system_pods.go:89] "kube-scheduler-pause-036330" [42727091-3480-42c6-adf6-8f9fe39f3f3d] Running
	I1108 00:00:35.668225   41128 system_pods.go:126] duration metric: took 203.148416ms to wait for k8s-apps to be running ...
	I1108 00:00:35.668234   41128 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 00:00:35.668277   41128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:00:35.683012   41128 system_svc.go:56] duration metric: took 14.770591ms WaitForService to wait for kubelet.
	I1108 00:00:35.683038   41128 kubeadm.go:581] duration metric: took 2.978176114s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1108 00:00:35.683059   41128 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:00:35.866013   41128 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:00:35.866049   41128 node_conditions.go:123] node cpu capacity is 2
	I1108 00:00:35.866064   41128 node_conditions.go:105] duration metric: took 182.999357ms to run NodePressure ...
	I1108 00:00:35.866079   41128 start.go:228] waiting for startup goroutines ...
	I1108 00:00:35.866094   41128 start.go:233] waiting for cluster config update ...
	I1108 00:00:35.866104   41128 start.go:242] writing updated cluster config ...
	I1108 00:00:35.866492   41128 ssh_runner.go:195] Run: rm -f paused
	I1108 00:00:35.926801   41128 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1108 00:00:35.929084   41128 out.go:177] * Done! kubectl is now configured to use "pause-036330" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-07 23:58:09 UTC, ends at Wed 2023-11-08 00:00:38 UTC. --
	Nov 08 00:00:38 pause-036330 crio[2447]: time="2023-11-08 00:00:38.700117158Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699401638700092784,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=d4a5ba2d-2c11-4704-a0ab-a37b7e861372 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:00:38 pause-036330 crio[2447]: time="2023-11-08 00:00:38.700768099Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=641c47b0-1c4c-4784-8597-a798329c5aad name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:00:38 pause-036330 crio[2447]: time="2023-11-08 00:00:38.700827622Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=641c47b0-1c4c-4784-8597-a798329c5aad name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:00:38 pause-036330 crio[2447]: time="2023-11-08 00:00:38.701084117Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f40d5f8ce73a44634f651f67452a614f1f170a4604127bad9b015e24877c8c3b,PodSandboxId:0dfa73a60a494bdbf64ccd4fc9bbb2eed546e6a0b700ee565eefd139fe43f0b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699401618503757888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-k9sl9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3362d1b2-8097-4aed-bbd6-a93177532c85,},Annotations:map[string]string{io.kubernetes.container.hash: 1e7f170f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c61e946f9e193f32ef80c8f891749abb38ff8eb408686ec5720d342cb8e43ff2,PodSandboxId:e2516d488721e4ce70b77b42b1b5843bc810b845fd5ae063d80f76e82f2865f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699401611869171830,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ff3041dafe4d9005156a66947fd07c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f000fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b64418769a4c6b36fc93623d7b4187c09c8ebde822957a37a869d6c6286e2ff,PodSandboxId:52b1d6b035012634948c7c691361c6b9dfb4310dca5377b1131f9b8c4154f8f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699401611900527441,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e59559b1117bcb3a35bdb8c0230031,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd98a636db4d95eeca6ce3a068699916169030db5fa6be733832b8f96a0d45e,PodSandboxId:79b7d62661620af23b510c2fd89d2f3efc39c7486370d93577600db862936d4d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699401611922653279,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b62ce9a74062c222190d8e23b7a456,},Annotations:map[string]string{io.kubernetes.container.hash: 87461a69,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8568358cb5fe9d3eff8b570ef6f715dab6866f8417fc09623e1d038335943540,PodSandboxId:7443dc923609d058dc48bf9e065ca9370de1658a1db9dccc3498d60c4850ce01,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699401611856424144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564c49d2b05d84df711e346c281f90c,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cdce1363460bbac4ae3acf4b7b4a124217ba0a6d982f8c3634e46d5c12511fb,PodSandboxId:c6340d15937badcaf035767a7703d93f90689c6253f27bab7d3e446cc771302b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699401605801485436,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cfpsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 588e1aff-542f-4465-b4fc-d6184da50a28,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 231d1420,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be15dbb404987f35e6e02b6e7dedf71d91e3a87054cb2c6ee8fafab396254127,PodSandboxId:0dfa73a60a494bdbf64ccd4fc9bbb2eed546e6a0b700ee565eefd139fe43f0b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1699401594969298090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-k9sl9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3362d1b2-8097-4aed-bbd6-a93177532c85,},Annotations:map[string]string{io.kubernetes.container
.hash: 1e7f170f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f25b64573b4714ea0285074dfe03ee7adee9595925e50374d5d3cb6444e13542,PodSandboxId:52b1d6b035012634948c7c691361c6b9dfb4310dca5377b1131f9b8c4154f8f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_EXITED,CreatedAt:1699401594587624859,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e59559b1117bcb3a35bdb8c0230031,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:442d5c77c1fdd41bc4e331405b997f9a73b3511a75a6217fdcd5cf6d27390f17,PodSandboxId:b14b72522bcc8e6e53f529c44aa66a1e032f0082d785e4b55b047f57bd307c4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1699401590422299064,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-036330,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 44ff3041dafe4d9005156a66947fd07c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f000fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:413b862783c5eb27cc45bd92f7318135d1d4ac9a36780ce42690fef1ae56f1a0,PodSandboxId:70a91342384b60c8444eb9ad6acf5d9417aad6c311f161f2f31680717e55ee1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1699401590685742127,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 7564c49d2b05d84df711e346c281f90c,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e083a77f12e1c1cb6b26075c1d64703a6ec41c5b706ba5b9f0f7018e2ff1d65a,PodSandboxId:8e3f779c871c2cb19806fdf8f4a17b424c1f9f0b43a543fb7745c4e9a93d73de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1699401589756716067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b62ce9a74062c222190d8e23b7a456,},Annotations:
map[string]string{io.kubernetes.container.hash: 87461a69,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21950f01f928951543991719c8cb206b68a47d655ee05b312ce93a8fc98df95f,PodSandboxId:e972dd03204c7d123371d66e069a6435dd251c55e9ad824d0eb84975d6c55215,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1699401541569663154,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cfpsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 588e1aff-542f-4465-b4fc-d6184da50a28,},Annotations:map[string]string{io.kubernetes.container.hash: 231d1420,io.kubernetes
.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=641c47b0-1c4c-4784-8597-a798329c5aad name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:00:38 pause-036330 crio[2447]: time="2023-11-08 00:00:38.747476797Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=0602d122-5943-4428-a05d-97bab7345e3e name=/runtime.v1.RuntimeService/Version
	Nov 08 00:00:38 pause-036330 crio[2447]: time="2023-11-08 00:00:38.747591357Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=0602d122-5943-4428-a05d-97bab7345e3e name=/runtime.v1.RuntimeService/Version
	Nov 08 00:00:38 pause-036330 crio[2447]: time="2023-11-08 00:00:38.748814537Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=4e6bd22f-530e-4911-8fc2-8787f40e0a4a name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:00:38 pause-036330 crio[2447]: time="2023-11-08 00:00:38.749151129Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699401638749139272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=4e6bd22f-530e-4911-8fc2-8787f40e0a4a name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:00:38 pause-036330 crio[2447]: time="2023-11-08 00:00:38.749712556Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=72116f5d-0094-422a-86b4-439d8147abc4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:00:38 pause-036330 crio[2447]: time="2023-11-08 00:00:38.749755093Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=72116f5d-0094-422a-86b4-439d8147abc4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:00:38 pause-036330 crio[2447]: time="2023-11-08 00:00:38.750139325Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f40d5f8ce73a44634f651f67452a614f1f170a4604127bad9b015e24877c8c3b,PodSandboxId:0dfa73a60a494bdbf64ccd4fc9bbb2eed546e6a0b700ee565eefd139fe43f0b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699401618503757888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-k9sl9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3362d1b2-8097-4aed-bbd6-a93177532c85,},Annotations:map[string]string{io.kubernetes.container.hash: 1e7f170f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c61e946f9e193f32ef80c8f891749abb38ff8eb408686ec5720d342cb8e43ff2,PodSandboxId:e2516d488721e4ce70b77b42b1b5843bc810b845fd5ae063d80f76e82f2865f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699401611869171830,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ff3041dafe
4d9005156a66947fd07c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f000fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b64418769a4c6b36fc93623d7b4187c09c8ebde822957a37a869d6c6286e2ff,PodSandboxId:52b1d6b035012634948c7c691361c6b9dfb4310dca5377b1131f9b8c4154f8f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699401611900527441,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e59559b1117bcb3a35bdb8c023
0031,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd98a636db4d95eeca6ce3a068699916169030db5fa6be733832b8f96a0d45e,PodSandboxId:79b7d62661620af23b510c2fd89d2f3efc39c7486370d93577600db862936d4d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699401611922653279,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b62ce9a74062c222190d8e23b7a456,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 87461a69,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8568358cb5fe9d3eff8b570ef6f715dab6866f8417fc09623e1d038335943540,PodSandboxId:7443dc923609d058dc48bf9e065ca9370de1658a1db9dccc3498d60c4850ce01,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699401611856424144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564c49d2b05d84df711e346
c281f90c,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cdce1363460bbac4ae3acf4b7b4a124217ba0a6d982f8c3634e46d5c12511fb,PodSandboxId:c6340d15937badcaf035767a7703d93f90689c6253f27bab7d3e446cc771302b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699401605801485436,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cfpsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 588e1aff-542f-4465-b4fc-d6184da50a28,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 231d1420,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be15dbb404987f35e6e02b6e7dedf71d91e3a87054cb2c6ee8fafab396254127,PodSandboxId:0dfa73a60a494bdbf64ccd4fc9bbb2eed546e6a0b700ee565eefd139fe43f0b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1699401594969298090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-k9sl9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3362d1b2-8097-4aed-bbd6-a93177532c85,},Annotations:map[string]string{io.kubernetes.container
.hash: 1e7f170f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f25b64573b4714ea0285074dfe03ee7adee9595925e50374d5d3cb6444e13542,PodSandboxId:52b1d6b035012634948c7c691361c6b9dfb4310dca5377b1131f9b8c4154f8f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_EXITED,CreatedAt:1699401594587624859,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e59559b1117bcb3a35bdb8c0230031,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:442d5c77c1fdd41bc4e331405b997f9a73b3511a75a6217fdcd5cf6d27390f17,PodSandboxId:b14b72522bcc8e6e53f529c44aa66a1e032f0082d785e4b55b047f57bd307c4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1699401590422299064,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-036330,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 44ff3041dafe4d9005156a66947fd07c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f000fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:413b862783c5eb27cc45bd92f7318135d1d4ac9a36780ce42690fef1ae56f1a0,PodSandboxId:70a91342384b60c8444eb9ad6acf5d9417aad6c311f161f2f31680717e55ee1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1699401590685742127,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 7564c49d2b05d84df711e346c281f90c,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e083a77f12e1c1cb6b26075c1d64703a6ec41c5b706ba5b9f0f7018e2ff1d65a,PodSandboxId:8e3f779c871c2cb19806fdf8f4a17b424c1f9f0b43a543fb7745c4e9a93d73de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1699401589756716067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b62ce9a74062c222190d8e23b7a456,},Annotations:
map[string]string{io.kubernetes.container.hash: 87461a69,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21950f01f928951543991719c8cb206b68a47d655ee05b312ce93a8fc98df95f,PodSandboxId:e972dd03204c7d123371d66e069a6435dd251c55e9ad824d0eb84975d6c55215,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1699401541569663154,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cfpsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 588e1aff-542f-4465-b4fc-d6184da50a28,},Annotations:map[string]string{io.kubernetes.container.hash: 231d1420,io.kubernetes
.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=72116f5d-0094-422a-86b4-439d8147abc4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:00:38 pause-036330 crio[2447]: time="2023-11-08 00:00:38.796602559Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c33e6c96-0eae-4219-9bc1-efab48201ac5 name=/runtime.v1.RuntimeService/Version
	Nov 08 00:00:38 pause-036330 crio[2447]: time="2023-11-08 00:00:38.796685801Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c33e6c96-0eae-4219-9bc1-efab48201ac5 name=/runtime.v1.RuntimeService/Version
	Nov 08 00:00:38 pause-036330 crio[2447]: time="2023-11-08 00:00:38.798024130Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=cd72606d-7772-4083-bd81-171cd12c4879 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:00:38 pause-036330 crio[2447]: time="2023-11-08 00:00:38.798361340Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699401638798346754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=cd72606d-7772-4083-bd81-171cd12c4879 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:00:38 pause-036330 crio[2447]: time="2023-11-08 00:00:38.798989426Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d07fd7d1-44dc-489c-8b81-d33ca585c8fd name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:00:38 pause-036330 crio[2447]: time="2023-11-08 00:00:38.799038065Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d07fd7d1-44dc-489c-8b81-d33ca585c8fd name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:00:38 pause-036330 crio[2447]: time="2023-11-08 00:00:38.799960795Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f40d5f8ce73a44634f651f67452a614f1f170a4604127bad9b015e24877c8c3b,PodSandboxId:0dfa73a60a494bdbf64ccd4fc9bbb2eed546e6a0b700ee565eefd139fe43f0b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699401618503757888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-k9sl9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3362d1b2-8097-4aed-bbd6-a93177532c85,},Annotations:map[string]string{io.kubernetes.container.hash: 1e7f170f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c61e946f9e193f32ef80c8f891749abb38ff8eb408686ec5720d342cb8e43ff2,PodSandboxId:e2516d488721e4ce70b77b42b1b5843bc810b845fd5ae063d80f76e82f2865f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699401611869171830,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ff3041dafe
4d9005156a66947fd07c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f000fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b64418769a4c6b36fc93623d7b4187c09c8ebde822957a37a869d6c6286e2ff,PodSandboxId:52b1d6b035012634948c7c691361c6b9dfb4310dca5377b1131f9b8c4154f8f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699401611900527441,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e59559b1117bcb3a35bdb8c023
0031,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd98a636db4d95eeca6ce3a068699916169030db5fa6be733832b8f96a0d45e,PodSandboxId:79b7d62661620af23b510c2fd89d2f3efc39c7486370d93577600db862936d4d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699401611922653279,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b62ce9a74062c222190d8e23b7a456,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 87461a69,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8568358cb5fe9d3eff8b570ef6f715dab6866f8417fc09623e1d038335943540,PodSandboxId:7443dc923609d058dc48bf9e065ca9370de1658a1db9dccc3498d60c4850ce01,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699401611856424144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564c49d2b05d84df711e346
c281f90c,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cdce1363460bbac4ae3acf4b7b4a124217ba0a6d982f8c3634e46d5c12511fb,PodSandboxId:c6340d15937badcaf035767a7703d93f90689c6253f27bab7d3e446cc771302b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699401605801485436,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cfpsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 588e1aff-542f-4465-b4fc-d6184da50a28,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 231d1420,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be15dbb404987f35e6e02b6e7dedf71d91e3a87054cb2c6ee8fafab396254127,PodSandboxId:0dfa73a60a494bdbf64ccd4fc9bbb2eed546e6a0b700ee565eefd139fe43f0b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1699401594969298090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-k9sl9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3362d1b2-8097-4aed-bbd6-a93177532c85,},Annotations:map[string]string{io.kubernetes.container
.hash: 1e7f170f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f25b64573b4714ea0285074dfe03ee7adee9595925e50374d5d3cb6444e13542,PodSandboxId:52b1d6b035012634948c7c691361c6b9dfb4310dca5377b1131f9b8c4154f8f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_EXITED,CreatedAt:1699401594587624859,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e59559b1117bcb3a35bdb8c0230031,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:442d5c77c1fdd41bc4e331405b997f9a73b3511a75a6217fdcd5cf6d27390f17,PodSandboxId:b14b72522bcc8e6e53f529c44aa66a1e032f0082d785e4b55b047f57bd307c4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1699401590422299064,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-036330,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 44ff3041dafe4d9005156a66947fd07c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f000fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:413b862783c5eb27cc45bd92f7318135d1d4ac9a36780ce42690fef1ae56f1a0,PodSandboxId:70a91342384b60c8444eb9ad6acf5d9417aad6c311f161f2f31680717e55ee1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1699401590685742127,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 7564c49d2b05d84df711e346c281f90c,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e083a77f12e1c1cb6b26075c1d64703a6ec41c5b706ba5b9f0f7018e2ff1d65a,PodSandboxId:8e3f779c871c2cb19806fdf8f4a17b424c1f9f0b43a543fb7745c4e9a93d73de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1699401589756716067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b62ce9a74062c222190d8e23b7a456,},Annotations:
map[string]string{io.kubernetes.container.hash: 87461a69,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21950f01f928951543991719c8cb206b68a47d655ee05b312ce93a8fc98df95f,PodSandboxId:e972dd03204c7d123371d66e069a6435dd251c55e9ad824d0eb84975d6c55215,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1699401541569663154,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cfpsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 588e1aff-542f-4465-b4fc-d6184da50a28,},Annotations:map[string]string{io.kubernetes.container.hash: 231d1420,io.kubernetes
.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d07fd7d1-44dc-489c-8b81-d33ca585c8fd name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:00:38 pause-036330 crio[2447]: time="2023-11-08 00:00:38.849134522Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4988ad4d-1a4a-461a-856b-0f2f82842597 name=/runtime.v1.RuntimeService/Version
	Nov 08 00:00:38 pause-036330 crio[2447]: time="2023-11-08 00:00:38.849193091Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4988ad4d-1a4a-461a-856b-0f2f82842597 name=/runtime.v1.RuntimeService/Version
	Nov 08 00:00:38 pause-036330 crio[2447]: time="2023-11-08 00:00:38.851004477Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3cec5052-8be6-4e75-a541-a62fa3d7282b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:00:38 pause-036330 crio[2447]: time="2023-11-08 00:00:38.851365750Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699401638851351646,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=3cec5052-8be6-4e75-a541-a62fa3d7282b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:00:38 pause-036330 crio[2447]: time="2023-11-08 00:00:38.852080722Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e420c5cd-dbe4-4b5d-af04-8b9580f35231 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:00:38 pause-036330 crio[2447]: time="2023-11-08 00:00:38.852128142Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e420c5cd-dbe4-4b5d-af04-8b9580f35231 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:00:38 pause-036330 crio[2447]: time="2023-11-08 00:00:38.852419741Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f40d5f8ce73a44634f651f67452a614f1f170a4604127bad9b015e24877c8c3b,PodSandboxId:0dfa73a60a494bdbf64ccd4fc9bbb2eed546e6a0b700ee565eefd139fe43f0b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699401618503757888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-k9sl9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3362d1b2-8097-4aed-bbd6-a93177532c85,},Annotations:map[string]string{io.kubernetes.container.hash: 1e7f170f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c61e946f9e193f32ef80c8f891749abb38ff8eb408686ec5720d342cb8e43ff2,PodSandboxId:e2516d488721e4ce70b77b42b1b5843bc810b845fd5ae063d80f76e82f2865f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699401611869171830,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ff3041dafe
4d9005156a66947fd07c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f000fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b64418769a4c6b36fc93623d7b4187c09c8ebde822957a37a869d6c6286e2ff,PodSandboxId:52b1d6b035012634948c7c691361c6b9dfb4310dca5377b1131f9b8c4154f8f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699401611900527441,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e59559b1117bcb3a35bdb8c023
0031,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd98a636db4d95eeca6ce3a068699916169030db5fa6be733832b8f96a0d45e,PodSandboxId:79b7d62661620af23b510c2fd89d2f3efc39c7486370d93577600db862936d4d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699401611922653279,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b62ce9a74062c222190d8e23b7a456,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 87461a69,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8568358cb5fe9d3eff8b570ef6f715dab6866f8417fc09623e1d038335943540,PodSandboxId:7443dc923609d058dc48bf9e065ca9370de1658a1db9dccc3498d60c4850ce01,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699401611856424144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564c49d2b05d84df711e346
c281f90c,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cdce1363460bbac4ae3acf4b7b4a124217ba0a6d982f8c3634e46d5c12511fb,PodSandboxId:c6340d15937badcaf035767a7703d93f90689c6253f27bab7d3e446cc771302b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699401605801485436,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cfpsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 588e1aff-542f-4465-b4fc-d6184da50a28,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 231d1420,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be15dbb404987f35e6e02b6e7dedf71d91e3a87054cb2c6ee8fafab396254127,PodSandboxId:0dfa73a60a494bdbf64ccd4fc9bbb2eed546e6a0b700ee565eefd139fe43f0b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1699401594969298090,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-k9sl9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3362d1b2-8097-4aed-bbd6-a93177532c85,},Annotations:map[string]string{io.kubernetes.container
.hash: 1e7f170f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f25b64573b4714ea0285074dfe03ee7adee9595925e50374d5d3cb6444e13542,PodSandboxId:52b1d6b035012634948c7c691361c6b9dfb4310dca5377b1131f9b8c4154f8f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_EXITED,CreatedAt:1699401594587624859,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12e59559b1117bcb3a35bdb8c0230031,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:442d5c77c1fdd41bc4e331405b997f9a73b3511a75a6217fdcd5cf6d27390f17,PodSandboxId:b14b72522bcc8e6e53f529c44aa66a1e032f0082d785e4b55b047f57bd307c4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1699401590422299064,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-036330,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 44ff3041dafe4d9005156a66947fd07c,},Annotations:map[string]string{io.kubernetes.container.hash: d4f000fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:413b862783c5eb27cc45bd92f7318135d1d4ac9a36780ce42690fef1ae56f1a0,PodSandboxId:70a91342384b60c8444eb9ad6acf5d9417aad6c311f161f2f31680717e55ee1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,State:CONTAINER_EXITED,CreatedAt:1699401590685742127,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 7564c49d2b05d84df711e346c281f90c,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e083a77f12e1c1cb6b26075c1d64703a6ec41c5b706ba5b9f0f7018e2ff1d65a,PodSandboxId:8e3f779c871c2cb19806fdf8f4a17b424c1f9f0b43a543fb7745c4e9a93d73de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,State:CONTAINER_EXITED,CreatedAt:1699401589756716067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-036330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13b62ce9a74062c222190d8e23b7a456,},Annotations:
map[string]string{io.kubernetes.container.hash: 87461a69,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21950f01f928951543991719c8cb206b68a47d655ee05b312ce93a8fc98df95f,PodSandboxId:e972dd03204c7d123371d66e069a6435dd251c55e9ad824d0eb84975d6c55215,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,State:CONTAINER_EXITED,CreatedAt:1699401541569663154,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cfpsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 588e1aff-542f-4465-b4fc-d6184da50a28,},Annotations:map[string]string{io.kubernetes.container.hash: 231d1420,io.kubernetes
.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e420c5cd-dbe4-4b5d-af04-8b9580f35231 name=/runtime.v1.RuntimeService/ListContainers
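	Note: the Version / ImageFsInfo / ListContainers requests logged above are routine CRI polling (both the kubelet and crictl issue them), not test traffic. As a minimal sketch, assuming the default CRI-O socket path and the k8s.io/cri-api Go bindings (neither is part of the test itself), the empty-filter ListContainers call that triggers the "No filters were applied, returning full container list" debug line can be made like this:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Dial the CRI-O socket via gRPC's unix scheme (default path in the minikube VM).
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		// An empty ContainerFilter returns every container, running or exited,
		// matching the ListContainersResponse dumps in the journal above.
		client := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// crictl truncates container IDs to 13 characters in its table output.
			fmt.Printf("%s  %-24s attempt=%d state=%s\n",
				c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}

	CRI-O logs each such request/response pair at debug level, which is why the same container list appears several times within one second above.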
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f40d5f8ce73a4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   20 seconds ago       Running             coredns                   2                   0dfa73a60a494       coredns-5dd5756b68-k9sl9
	5fd98a636db4d       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   27 seconds ago       Running             kube-apiserver            2                   79b7d62661620       kube-apiserver-pause-036330
	6b64418769a4c       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   27 seconds ago       Running             kube-scheduler            2                   52b1d6b035012       kube-scheduler-pause-036330
	c61e946f9e193       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   27 seconds ago       Running             etcd                      2                   e2516d488721e       etcd-pause-036330
	8568358cb5fe9       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   27 seconds ago       Running             kube-controller-manager   2                   7443dc923609d       kube-controller-manager-pause-036330
	0cdce1363460b       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   33 seconds ago       Running             kube-proxy                1                   c6340d15937ba       kube-proxy-cfpsq
	be15dbb404987       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   43 seconds ago       Exited              coredns                   1                   0dfa73a60a494       coredns-5dd5756b68-k9sl9
	f25b64573b471       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   44 seconds ago       Exited              kube-scheduler            1                   52b1d6b035012       kube-scheduler-pause-036330
	413b862783c5e       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   48 seconds ago       Exited              kube-controller-manager   1                   70a91342384b6       kube-controller-manager-pause-036330
	442d5c77c1fdd       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   48 seconds ago       Exited              etcd                      1                   b14b72522bcc8       etcd-pause-036330
	e083a77f12e1c       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   49 seconds ago       Exited              kube-apiserver            1                   8e3f779c871c2       kube-apiserver-pause-036330
	21950f01f9289       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   About a minute ago   Exited              kube-proxy                0                   e972dd03204c7       kube-proxy-cfpsq
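	
	The container table above and the raw JSON dump before it both come from the same CRI endpoint, /runtime.v1.RuntimeService/ListContainers, served over CRI-O's unix socket (the cri-socket annotation in the node description below). A minimal Go sketch of that query, purely illustrative and assuming the k8s.io/cri-api and google.golang.org/grpc modules (it is not part of the test harness):
	
		package main
	
		import (
			"context"
			"fmt"
			"log"
			"time"
	
			"google.golang.org/grpc"
			"google.golang.org/grpc/credentials/insecure"
			runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
		)
	
		func main() {
			// CRI-O listens on a local unix socket, so no TLS is involved.
			conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
				grpc.WithTransportCredentials(insecure.NewCredentials()))
			if err != nil {
				log.Fatal(err)
			}
			defer conn.Close()
	
			ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
			defer cancel()
	
			// An empty filter returns all containers, exited ones included,
			// which is what the STATE/ATTEMPT columns in the table above show.
			client := runtimeapi.NewRuntimeServiceClient(conn)
			resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
			if err != nil {
				log.Fatal(err)
			}
			for _, c := range resp.Containers {
				fmt.Printf("%.13s  %-25s %s\n", c.Id, c.Metadata.Name, c.State)
			}
		}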
	
	* 
	* ==> coredns [be15dbb404987f35e6e02b6e7dedf71d91e3a87054cb2c6ee8fafab396254127] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51392 - 13756 "HINFO IN 8951741127895334174.4278495589966544284. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013559422s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [f40d5f8ce73a44634f651f67452a614f1f170a4604127bad9b015e24877c8c3b] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41081 - 64728 "HINFO IN 235926611506889076.1863088964529852031. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.024389615s
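	
	The lone HINFO lookup each CoreDNS instance logs at startup (random numeric labels, NXDOMAIN answer) is its forwarding-loop self-check: the loop plugin queries a name that cannot exist and only complains if the query comes back around. A sketch reproducing such a probe with the github.com/miekg/dns client, for illustration only:
	
		package main
	
		import (
			"fmt"
			"log"
	
			"github.com/miekg/dns"
		)
	
		func main() {
			// Build a HINFO query for a random-looking FQDN, as the loop plugin does.
			m := new(dns.Msg)
			m.SetQuestion("8951741127895334174.4278495589966544284.", dns.TypeHINFO)
	
			// Ask the local CoreDNS listener directly.
			c := new(dns.Client)
			r, rtt, err := c.Exchange(m, "127.0.0.1:53")
			if err != nil {
				log.Fatal(err)
			}
			// NXDOMAIN is the healthy answer here: the name must not resolve,
			// and the query must not echo back through this same server.
			fmt.Printf("rcode=%s rtt=%s\n", dns.RcodeToString[r.Rcode], rtt)
		}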
	
	* 
	* ==> describe nodes <==
	* Name:               pause-036330
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-036330
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e
	                    minikube.k8s.io/name=pause-036330
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_07T23_58_45_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 Nov 2023 23:58:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-036330
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Nov 2023 00:00:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Nov 2023 00:00:17 +0000   Tue, 07 Nov 2023 23:58:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Nov 2023 00:00:17 +0000   Tue, 07 Nov 2023 23:58:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Nov 2023 00:00:17 +0000   Tue, 07 Nov 2023 23:58:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Nov 2023 00:00:17 +0000   Tue, 07 Nov 2023 23:58:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.61
	  Hostname:    pause-036330
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 0e341b7e4a2f4a6da1ec09eaea64e612
	  System UUID:                0e341b7e-4a2f-4a6d-a1ec-09eaea64e612
	  Boot ID:                    62bd37ee-47bd-406b-8028-c687da144514
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-k9sl9                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     100s
	  kube-system                 etcd-pause-036330                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         114s
	  kube-system                 kube-apiserver-pause-036330             250m (12%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-pause-036330    200m (10%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-cfpsq                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-scheduler-pause-036330             100m (5%)     0 (0%)      0 (0%)           0 (0%)         114s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 97s                kube-proxy       
	  Normal  Starting                 21s                kube-proxy       
	  Normal  NodeAllocatableEnforced  114s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  114s               kubelet          Node pause-036330 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s               kubelet          Node pause-036330 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s               kubelet          Node pause-036330 status is now: NodeHasSufficientPID
	  Normal  NodeReady                114s               kubelet          Node pause-036330 status is now: NodeReady
	  Normal  Starting                 114s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           103s               node-controller  Node pause-036330 event: Registered Node pause-036330 in Controller
	  Normal  Starting                 28s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s (x8 over 28s)  kubelet          Node pause-036330 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s (x8 over 28s)  kubelet          Node pause-036330 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s (x7 over 28s)  kubelet          Node pause-036330 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9s                 node-controller  Node pause-036330 event: Registered Node pause-036330 in Controller
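	
	Everything in the describe output above is read from the Node object's status, so the same conditions can be pulled programmatically. A minimal client-go sketch, with the kubeconfig path from this CI run as an assumption (any kubeconfig for the cluster works):
	
		package main
	
		import (
			"context"
			"fmt"
			"log"
	
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)
	
		func main() {
			// Assumed path: the kubeconfig this test run writes under MINIKUBE_HOME.
			cfg, err := clientcmd.BuildConfigFromFlags("",
				"/home/jenkins/minikube-integration/17585-9647/kubeconfig")
			if err != nil {
				log.Fatal(err)
			}
			cs, err := kubernetes.NewForConfig(cfg)
			if err != nil {
				log.Fatal(err)
			}
			node, err := cs.CoreV1().Nodes().Get(context.Background(),
				"pause-036330", metav1.GetOptions{})
			if err != nil {
				log.Fatal(err)
			}
			// Prints the same rows as the Conditions table above.
			for _, c := range node.Status.Conditions {
				fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
			}
		}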
	
	* 
	* ==> dmesg <==
	* [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.065075] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.409114] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.856915] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.149880] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.068873] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.249825] systemd-fstab-generator[644]: Ignoring "noauto" for root device
	[  +0.120866] systemd-fstab-generator[655]: Ignoring "noauto" for root device
	[  +0.163839] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.120224] systemd-fstab-generator[679]: Ignoring "noauto" for root device
	[  +0.260374] systemd-fstab-generator[703]: Ignoring "noauto" for root device
	[  +9.051624] systemd-fstab-generator[925]: Ignoring "noauto" for root device
	[  +8.751755] systemd-fstab-generator[1259]: Ignoring "noauto" for root device
	[Nov 7 23:59] kauditd_printk_skb: 21 callbacks suppressed
	[  +9.382936] systemd-fstab-generator[2192]: Ignoring "noauto" for root device
	[  +0.346585] systemd-fstab-generator[2221]: Ignoring "noauto" for root device
	[  +0.310157] systemd-fstab-generator[2252]: Ignoring "noauto" for root device
	[  +0.242672] systemd-fstab-generator[2271]: Ignoring "noauto" for root device
	[  +0.594333] systemd-fstab-generator[2349]: Ignoring "noauto" for root device
	[Nov 8 00:00] systemd-fstab-generator[3181]: Ignoring "noauto" for root device
	[  +6.816002] kauditd_printk_skb: 8 callbacks suppressed
	
	* 
	* ==> etcd [442d5c77c1fdd41bc4e331405b997f9a73b3511a75a6217fdcd5cf6d27390f17] <==
	* 
	* 
	* ==> etcd [c61e946f9e193f32ef80c8f891749abb38ff8eb408686ec5720d342cb8e43ff2] <==
	* {"level":"info","ts":"2023-11-08T00:00:14.294373Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.61:2380"}
	{"level":"info","ts":"2023-11-08T00:00:14.294443Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.61:2380"}
	{"level":"info","ts":"2023-11-08T00:00:14.294489Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"be6e2cf5fb13c","initial-advertise-peer-urls":["https://192.168.39.61:2380"],"listen-peer-urls":["https://192.168.39.61:2380"],"advertise-client-urls":["https://192.168.39.61:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.61:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-08T00:00:14.294965Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-08T00:00:15.839727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c is starting a new election at term 2"}
	{"level":"info","ts":"2023-11-08T00:00:15.839795Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c became pre-candidate at term 2"}
	{"level":"info","ts":"2023-11-08T00:00:15.839846Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c received MsgPreVoteResp from be6e2cf5fb13c at term 2"}
	{"level":"info","ts":"2023-11-08T00:00:15.83986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c became candidate at term 3"}
	{"level":"info","ts":"2023-11-08T00:00:15.839866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c received MsgVoteResp from be6e2cf5fb13c at term 3"}
	{"level":"info","ts":"2023-11-08T00:00:15.839874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c became leader at term 3"}
	{"level":"info","ts":"2023-11-08T00:00:15.839881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: be6e2cf5fb13c elected leader be6e2cf5fb13c at term 3"}
	{"level":"info","ts":"2023-11-08T00:00:15.845245Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"be6e2cf5fb13c","local-member-attributes":"{Name:pause-036330 ClientURLs:[https://192.168.39.61:2379]}","request-path":"/0/members/be6e2cf5fb13c/attributes","cluster-id":"855213fb0218a9ad","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-08T00:00:15.845445Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T00:00:15.845464Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T00:00:15.846779Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-08T00:00:15.847219Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-08T00:00:15.847277Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-08T00:00:15.846813Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.61:2379"}
	{"level":"info","ts":"2023-11-08T00:00:30.068709Z","caller":"traceutil/trace.go:171","msg":"trace[794105695] transaction","detail":"{read_only:false; response_revision:447; number_of_response:1; }","duration":"136.364047ms","start":"2023-11-08T00:00:29.932319Z","end":"2023-11-08T00:00:30.068683Z","steps":["trace[794105695] 'process raft request'  (duration: 135.644202ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-08T00:00:30.306912Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"226.938617ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-controller\" ","response":"range_response_count:1 size:201"}
	{"level":"info","ts":"2023-11-08T00:00:30.307048Z","caller":"traceutil/trace.go:171","msg":"trace[849905167] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-controller; range_end:; response_count:1; response_revision:447; }","duration":"227.109815ms","start":"2023-11-08T00:00:30.079915Z","end":"2023-11-08T00:00:30.307025Z","steps":["trace[849905167] 'range keys from in-memory index tree'  (duration: 226.834735ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-08T00:00:30.307853Z","caller":"traceutil/trace.go:171","msg":"trace[871415036] linearizableReadLoop","detail":"{readStateIndex:488; appliedIndex:487; }","duration":"118.507917ms","start":"2023-11-08T00:00:30.189321Z","end":"2023-11-08T00:00:30.307829Z","steps":["trace[871415036] 'read index received'  (duration: 118.337561ms)","trace[871415036] 'applied index is now lower than readState.Index'  (duration: 169.798µs)"],"step_count":2}
	{"level":"warn","ts":"2023-11-08T00:00:30.308016Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.703926ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-036330\" ","response":"range_response_count:1 size:6629"}
	{"level":"info","ts":"2023-11-08T00:00:30.308081Z","caller":"traceutil/trace.go:171","msg":"trace[2028021459] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-036330; range_end:; response_count:1; response_revision:448; }","duration":"118.770911ms","start":"2023-11-08T00:00:30.189294Z","end":"2023-11-08T00:00:30.308065Z","steps":["trace[2028021459] 'agreement among raft nodes before linearized reading'  (duration: 118.629737ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-08T00:00:30.30841Z","caller":"traceutil/trace.go:171","msg":"trace[1027259908] transaction","detail":"{read_only:false; response_revision:448; number_of_response:1; }","duration":"227.575801ms","start":"2023-11-08T00:00:30.080823Z","end":"2023-11-08T00:00:30.308399Z","steps":["trace[1027259908] 'process raft request'  (duration: 226.884454ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  00:00:39 up 2 min,  0 users,  load average: 1.62, 0.76, 0.29
	Linux pause-036330 5.10.57 #1 SMP Tue Nov 7 06:51:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [5fd98a636db4d95eeca6ce3a068699916169030db5fa6be733832b8f96a0d45e] <==
	* I1108 00:00:17.238078       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1108 00:00:17.238104       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1108 00:00:17.238258       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1108 00:00:17.403228       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 00:00:17.427131       1 shared_informer.go:318] Caches are synced for configmaps
	I1108 00:00:17.429711       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 00:00:17.435954       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1108 00:00:17.436007       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1108 00:00:17.456765       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1108 00:00:17.456878       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1108 00:00:17.456931       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1108 00:00:17.460105       1 aggregator.go:166] initial CRD sync complete...
	I1108 00:00:17.460159       1 autoregister_controller.go:141] Starting autoregister controller
	I1108 00:00:17.460166       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 00:00:17.460176       1 cache.go:39] Caches are synced for autoregister controller
	I1108 00:00:17.467014       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1108 00:00:17.486287       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 00:00:18.250494       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 00:00:19.015469       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1108 00:00:19.027935       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1108 00:00:19.080385       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1108 00:00:19.112720       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 00:00:19.122102       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 00:00:30.598155       1 controller.go:624] quota admission added evaluator for: endpoints
	I1108 00:00:30.624223       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [e083a77f12e1c1cb6b26075c1d64703a6ec41c5b706ba5b9f0f7018e2ff1d65a] <==
	* 
	* 
	* ==> kube-controller-manager [413b862783c5eb27cc45bd92f7318135d1d4ac9a36780ce42690fef1ae56f1a0] <==
	* 
	* 
	* ==> kube-controller-manager [8568358cb5fe9d3eff8b570ef6f715dab6866f8417fc09623e1d038335943540] <==
	* I1108 00:00:30.450761       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1108 00:00:30.450844       1 taint_manager.go:211] "Sending events to api server"
	I1108 00:00:30.450777       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1108 00:00:30.451240       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1108 00:00:30.451284       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1108 00:00:30.451444       1 event.go:307] "Event occurred" object="pause-036330" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-036330 event: Registered Node pause-036330 in Controller"
	I1108 00:00:30.454045       1 shared_informer.go:318] Caches are synced for crt configmap
	I1108 00:00:30.462459       1 shared_informer.go:318] Caches are synced for daemon sets
	I1108 00:00:30.464868       1 shared_informer.go:318] Caches are synced for deployment
	I1108 00:00:30.471414       1 shared_informer.go:318] Caches are synced for namespace
	I1108 00:00:30.473071       1 shared_informer.go:318] Caches are synced for PV protection
	I1108 00:00:30.477005       1 shared_informer.go:318] Caches are synced for HPA
	I1108 00:00:30.479327       1 shared_informer.go:318] Caches are synced for GC
	I1108 00:00:30.479919       1 shared_informer.go:318] Caches are synced for stateful set
	I1108 00:00:30.503221       1 shared_informer.go:318] Caches are synced for job
	I1108 00:00:30.509224       1 shared_informer.go:318] Caches are synced for cronjob
	I1108 00:00:30.516190       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1108 00:00:30.535793       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1108 00:00:30.551400       1 shared_informer.go:318] Caches are synced for resource quota
	I1108 00:00:30.557799       1 shared_informer.go:318] Caches are synced for endpoint
	I1108 00:00:30.584823       1 shared_informer.go:318] Caches are synced for resource quota
	I1108 00:00:30.597671       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1108 00:00:30.981146       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 00:00:31.030893       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 00:00:31.030966       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [0cdce1363460bbac4ae3acf4b7b4a124217ba0a6d982f8c3634e46d5c12511fb] <==
	* I1108 00:00:06.035688       1 server_others.go:69] "Using iptables proxy"
	E1108 00:00:06.040230       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-036330": dial tcp 192.168.39.61:8443: connect: connection refused
	E1108 00:00:07.151431       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-036330": dial tcp 192.168.39.61:8443: connect: connection refused
	E1108 00:00:09.299823       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-036330": dial tcp 192.168.39.61:8443: connect: connection refused
	I1108 00:00:17.446740       1 node.go:141] Successfully retrieved node IP: 192.168.39.61
	I1108 00:00:17.532884       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1108 00:00:17.532998       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1108 00:00:17.536146       1 server_others.go:152] "Using iptables Proxier"
	I1108 00:00:17.536242       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1108 00:00:17.536409       1 server.go:846] "Version info" version="v1.28.3"
	I1108 00:00:17.536417       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 00:00:17.537675       1 config.go:188] "Starting service config controller"
	I1108 00:00:17.537724       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1108 00:00:17.537750       1 config.go:97] "Starting endpoint slice config controller"
	I1108 00:00:17.537753       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1108 00:00:17.538215       1 config.go:315] "Starting node config controller"
	I1108 00:00:17.538221       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1108 00:00:17.638983       1 shared_informer.go:318] Caches are synced for node config
	I1108 00:00:17.639035       1 shared_informer.go:318] Caches are synced for service config
	I1108 00:00:17.639057       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [21950f01f928951543991719c8cb206b68a47d655ee05b312ce93a8fc98df95f] <==
	* I1107 23:59:02.003253       1 server_others.go:69] "Using iptables proxy"
	I1107 23:59:02.026680       1 node.go:141] Successfully retrieved node IP: 192.168.39.61
	I1107 23:59:02.098666       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1107 23:59:02.098736       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1107 23:59:02.101414       1 server_others.go:152] "Using iptables Proxier"
	I1107 23:59:02.101869       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1107 23:59:02.102072       1 server.go:846] "Version info" version="v1.28.3"
	I1107 23:59:02.102294       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 23:59:02.104057       1 config.go:188] "Starting service config controller"
	I1107 23:59:02.104166       1 config.go:97] "Starting endpoint slice config controller"
	I1107 23:59:02.104448       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1107 23:59:02.104633       1 config.go:315] "Starting node config controller"
	I1107 23:59:02.104665       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1107 23:59:02.104854       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1107 23:59:02.205816       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1107 23:59:02.206036       1 shared_informer.go:318] Caches are synced for service config
	I1107 23:59:02.206386       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [6b64418769a4c6b36fc93623d7b4187c09c8ebde822957a37a869d6c6286e2ff] <==
	* I1108 00:00:14.091966       1 serving.go:348] Generated self-signed cert in-memory
	W1108 00:00:17.309380       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1108 00:00:17.309623       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 00:00:17.309636       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1108 00:00:17.309643       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1108 00:00:17.400277       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1108 00:00:17.402116       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 00:00:17.405668       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 00:00:17.405739       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1108 00:00:17.409665       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1108 00:00:17.409751       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1108 00:00:17.506828       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [f25b64573b4714ea0285074dfe03ee7adee9595925e50374d5d3cb6444e13542] <==
	* E1108 00:00:04.197975       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.61:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	W1108 00:00:04.353883       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.61:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	E1108 00:00:04.353968       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.61:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	W1108 00:00:04.722909       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.39.61:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	E1108 00:00:04.723003       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.61:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	W1108 00:00:04.726825       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.61:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	E1108 00:00:04.726906       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.61:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	W1108 00:00:04.991904       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.39.61:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	E1108 00:00:04.991991       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.61:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	W1108 00:00:05.187912       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.39.61:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	E1108 00:00:05.188008       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.61:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	W1108 00:00:05.254245       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.39.61:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	E1108 00:00:05.254310       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.61:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	W1108 00:00:06.153851       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.39.61:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	E1108 00:00:06.153999       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.61:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	W1108 00:00:06.236231       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.61:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	E1108 00:00:06.236373       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.61:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	W1108 00:00:06.342359       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.39.61:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	E1108 00:00:06.342449       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.61:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	W1108 00:00:06.450509       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.39.61:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	E1108 00:00:06.450710       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.61:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	W1108 00:00:06.805409       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.39.61:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	E1108 00:00:06.805497       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.61:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	E1108 00:00:09.743921       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E1108 00:00:09.744843       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-07 23:58:09 UTC, ends at Wed 2023-11-08 00:00:39 UTC. --
	Nov 08 00:00:11 pause-036330 kubelet[3187]: E1108 00:00:11.900671    3187 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.61:8443: connect: connection refused" node="pause-036330"
	Nov 08 00:00:12 pause-036330 kubelet[3187]: W1108 00:00:12.064476    3187 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	Nov 08 00:00:12 pause-036330 kubelet[3187]: E1108 00:00:12.064624    3187 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	Nov 08 00:00:12 pause-036330 kubelet[3187]: W1108 00:00:12.231119    3187 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	Nov 08 00:00:12 pause-036330 kubelet[3187]: E1108 00:00:12.231223    3187 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	Nov 08 00:00:12 pause-036330 kubelet[3187]: W1108 00:00:12.258416    3187 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-036330&limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	Nov 08 00:00:12 pause-036330 kubelet[3187]: E1108 00:00:12.258530    3187 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-036330&limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	Nov 08 00:00:12 pause-036330 kubelet[3187]: W1108 00:00:12.522831    3187 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	Nov 08 00:00:12 pause-036330 kubelet[3187]: E1108 00:00:12.522918    3187 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.61:8443: connect: connection refused
	Nov 08 00:00:12 pause-036330 kubelet[3187]: E1108 00:00:12.594132    3187 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-036330?timeout=10s\": dial tcp 192.168.39.61:8443: connect: connection refused" interval="1.6s"
	Nov 08 00:00:12 pause-036330 kubelet[3187]: I1108 00:00:12.702372    3187 kubelet_node_status.go:70] "Attempting to register node" node="pause-036330"
	Nov 08 00:00:12 pause-036330 kubelet[3187]: E1108 00:00:12.702842    3187 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.61:8443: connect: connection refused" node="pause-036330"
	Nov 08 00:00:14 pause-036330 kubelet[3187]: I1108 00:00:14.305078    3187 kubelet_node_status.go:70] "Attempting to register node" node="pause-036330"
	Nov 08 00:00:17 pause-036330 kubelet[3187]: I1108 00:00:17.485201    3187 kubelet_node_status.go:108] "Node was previously registered" node="pause-036330"
	Nov 08 00:00:17 pause-036330 kubelet[3187]: I1108 00:00:17.485280    3187 kubelet_node_status.go:73] "Successfully registered node" node="pause-036330"
	Nov 08 00:00:17 pause-036330 kubelet[3187]: I1108 00:00:17.487196    3187 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 08 00:00:17 pause-036330 kubelet[3187]: I1108 00:00:17.488357    3187 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 08 00:00:18 pause-036330 kubelet[3187]: I1108 00:00:18.171998    3187 apiserver.go:52] "Watching apiserver"
	Nov 08 00:00:18 pause-036330 kubelet[3187]: I1108 00:00:18.176033    3187 topology_manager.go:215] "Topology Admit Handler" podUID="588e1aff-542f-4465-b4fc-d6184da50a28" podNamespace="kube-system" podName="kube-proxy-cfpsq"
	Nov 08 00:00:18 pause-036330 kubelet[3187]: I1108 00:00:18.176193    3187 topology_manager.go:215] "Topology Admit Handler" podUID="3362d1b2-8097-4aed-bbd6-a93177532c85" podNamespace="kube-system" podName="coredns-5dd5756b68-k9sl9"
	Nov 08 00:00:18 pause-036330 kubelet[3187]: I1108 00:00:18.189983    3187 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Nov 08 00:00:18 pause-036330 kubelet[3187]: I1108 00:00:18.257401    3187 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/588e1aff-542f-4465-b4fc-d6184da50a28-xtables-lock\") pod \"kube-proxy-cfpsq\" (UID: \"588e1aff-542f-4465-b4fc-d6184da50a28\") " pod="kube-system/kube-proxy-cfpsq"
	Nov 08 00:00:18 pause-036330 kubelet[3187]: I1108 00:00:18.257497    3187 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/588e1aff-542f-4465-b4fc-d6184da50a28-lib-modules\") pod \"kube-proxy-cfpsq\" (UID: \"588e1aff-542f-4465-b4fc-d6184da50a28\") " pod="kube-system/kube-proxy-cfpsq"
	Nov 08 00:00:18 pause-036330 kubelet[3187]: I1108 00:00:18.477651    3187 scope.go:117] "RemoveContainer" containerID="be15dbb404987f35e6e02b6e7dedf71d91e3a87054cb2c6ee8fafab396254127"
	Nov 08 00:00:26 pause-036330 kubelet[3187]: I1108 00:00:26.534104    3187 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
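	
	The "Failed to ensure lease exists, will retry" entries refer to the kubelet's heartbeat Lease in the kube-node-lease namespace, the same object behind the RenewTime field in the node description earlier. An illustrative client-go sketch for inspecting it (the kubeconfig path is an assumption):
	
		package main
	
		import (
			"context"
			"fmt"
			"log"
	
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)
	
		func main() {
			// RecommendedHomeFile is ~/.kube/config; substitute the kubeconfig
			// for this cluster as needed.
			cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
			if err != nil {
				log.Fatal(err)
			}
			cs, err := kubernetes.NewForConfig(cfg)
			if err != nil {
				log.Fatal(err)
			}
			// Each node owns one Lease; the kubelet renews it as its heartbeat.
			lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(
				context.Background(), "pause-036330", metav1.GetOptions{})
			if err != nil {
				log.Fatal(err)
			}
			fmt.Printf("holder=%s renewed=%s\n",
				*lease.Spec.HolderIdentity, lease.Spec.RenewTime)
		}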
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-036330 -n pause-036330
helpers_test.go:261: (dbg) Run:  kubectl --context pause-036330 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (58.37s)

TestStoppedBinaryUpgrade/Upgrade (264.94s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.2053566179.exe start -p stopped-upgrade-688874 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E1108 00:03:53.871535   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.2053566179.exe start -p stopped-upgrade-688874 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m11.761060581s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.2053566179.exe -p stopped-upgrade-688874 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.2053566179.exe -p stopped-upgrade-688874 stop: (1m32.471685091s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-688874 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-688874 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (40.695839148s)

-- stdout --
	* [stopped-upgrade-688874] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-688874 in cluster stopped-upgrade-688874
	* Restarting existing kvm2 VM for "stopped-upgrade-688874" ...
	
	

-- /stdout --
** stderr ** 
	I1108 00:06:54.610722   49282 out.go:296] Setting OutFile to fd 1 ...
	I1108 00:06:54.610882   49282 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:06:54.610893   49282 out.go:309] Setting ErrFile to fd 2...
	I1108 00:06:54.610900   49282 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:06:54.611100   49282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
	I1108 00:06:54.611641   49282 out.go:303] Setting JSON to false
	I1108 00:06:54.612561   49282 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6564,"bootTime":1699395451,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 00:06:54.612615   49282 start.go:138] virtualization: kvm guest
	I1108 00:06:54.614869   49282 out.go:177] * [stopped-upgrade-688874] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1108 00:06:54.616151   49282 out.go:177]   - MINIKUBE_LOCATION=17585
	I1108 00:06:54.617384   49282 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 00:06:54.616199   49282 notify.go:220] Checking for updates...
	I1108 00:06:54.620086   49282 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:06:54.621370   49282 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	I1108 00:06:54.622711   49282 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 00:06:54.623878   49282 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 00:06:54.625399   49282 config.go:182] Loaded profile config "stopped-upgrade-688874": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1108 00:06:54.625416   49282 start_flags.go:694] config upgrade: Driver=kvm2
	I1108 00:06:54.625424   49282 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1108 00:06:54.625503   49282 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/stopped-upgrade-688874/config.json ...
	I1108 00:06:54.626042   49282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:06:54.626123   49282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:06:54.640093   49282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42709
	I1108 00:06:54.640418   49282 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:06:54.640944   49282 main.go:141] libmachine: Using API Version  1
	I1108 00:06:54.640968   49282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:06:54.641278   49282 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:06:54.641446   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .DriverName
	I1108 00:06:54.643226   49282 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1108 00:06:54.644512   49282 driver.go:378] Setting default libvirt URI to qemu:///system
	I1108 00:06:54.644784   49282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:06:54.644836   49282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:06:54.658691   49282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40109
	I1108 00:06:54.659066   49282 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:06:54.659475   49282 main.go:141] libmachine: Using API Version  1
	I1108 00:06:54.659496   49282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:06:54.659773   49282 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:06:54.659940   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .DriverName
	I1108 00:06:54.693950   49282 out.go:177] * Using the kvm2 driver based on existing profile
	I1108 00:06:54.695251   49282 start.go:298] selected driver: kvm2
	I1108 00:06:54.695264   49282 start.go:902] validating driver "kvm2" against &{Name:stopped-upgrade-688874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APISe
rverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.83.156 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 Auto
PauseInterval:0s GPUs:}
	I1108 00:06:54.695353   49282 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 00:06:54.695971   49282 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:06:54.696045   49282 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17585-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1108 00:06:54.709893   49282 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1108 00:06:54.710232   49282 cni.go:84] Creating CNI manager for ""
	I1108 00:06:54.710248   49282 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1108 00:06:54.710256   49282 start_flags.go:323] config:
	{Name:stopped-upgrade-688874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.83.156 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1108 00:06:54.710391   49282 iso.go:125] acquiring lock: {Name:mk02d02b2a7a45dbdd1b46a32fb0724673cb4d8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:06:54.712197   49282 out.go:177] * Starting control plane node stopped-upgrade-688874 in cluster stopped-upgrade-688874
	I1108 00:06:54.713456   49282 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1108 00:06:55.183929   49282 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1108 00:06:55.184068   49282 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/stopped-upgrade-688874/config.json ...
	I1108 00:06:55.184208   49282 cache.go:107] acquiring lock: {Name:mkf3ec2311550e530c52ff03f93128f88da2978d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:06:55.184249   49282 cache.go:107] acquiring lock: {Name:mkdbddaea61325c36bdd1deac0add3efa1ed6c67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:06:55.184257   49282 cache.go:107] acquiring lock: {Name:mk594dd7549e60e18d8bd4293c5811a96eeb191f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:06:55.184312   49282 cache.go:115] /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1108 00:06:55.184335   49282 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 149.783µs
	I1108 00:06:55.184346   49282 cache.go:115] /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1108 00:06:55.184350   49282 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1108 00:06:55.184208   49282 cache.go:107] acquiring lock: {Name:mk8b5344aa14cfab6603c16267abbee9b90b28bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:06:55.184350   49282 start.go:365] acquiring machines lock for stopped-upgrade-688874: {Name:mkf032f30be570950285b6e092e75fb29cc3d166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1108 00:06:55.184358   49282 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 111.441µs
	I1108 00:06:55.184368   49282 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1108 00:06:55.184423   49282 cache.go:115] /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1108 00:06:55.184428   49282 start.go:369] acquired machines lock for "stopped-upgrade-688874" in 54.909µs
	I1108 00:06:55.184448   49282 start.go:96] Skipping create...Using existing machine configuration
	I1108 00:06:55.184461   49282 fix.go:54] fixHost starting: minikube
	I1108 00:06:55.184436   49282 cache.go:107] acquiring lock: {Name:mkc8d68bbf6cc058924e812306c43f81f262db6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:06:55.184490   49282 cache.go:115] /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1108 00:06:55.184480   49282 cache.go:107] acquiring lock: {Name:mk5637421e24801b5bf8ad3ca48d00e3f68a1b01 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:06:55.184504   49282 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 256.739µs
	I1108 00:06:55.184514   49282 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1108 00:06:55.184441   49282 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 239.888µs
	I1108 00:06:55.184543   49282 cache.go:115] /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1108 00:06:55.184546   49282 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1108 00:06:55.184551   49282 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 202.796µs
	I1108 00:06:55.184505   49282 cache.go:107] acquiring lock: {Name:mk26d4ab1ebf0433db88303c7e5eca95c08d7379 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:06:55.184607   49282 cache.go:115] /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1108 00:06:55.184621   49282 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 353.65µs
	I1108 00:06:55.184640   49282 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1108 00:06:55.184518   49282 cache.go:107] acquiring lock: {Name:mk7e8e34bf28915c1bfc670e8841f663a70cb5e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:06:55.184578   49282 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1108 00:06:55.184643   49282 cache.go:115] /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1108 00:06:55.184689   49282 cache.go:115] /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1108 00:06:55.184694   49282 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 230.031µs
	I1108 00:06:55.184697   49282 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 210.12µs
	I1108 00:06:55.184702   49282 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1108 00:06:55.184706   49282 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1108 00:06:55.184720   49282 cache.go:87] Successfully saved all images to host disk.
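The interleaved cache.go lines above all follow one check-then-skip pattern: if the image tarball already exists under .minikube/cache, the "save" is a no-op, which is why every duration is reported in microseconds. A minimal sketch of that pattern, with a hypothetical helper (not the real cache.go):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// saveToTarIfMissing skips the expensive pull/save when the cached tarball
	// is already on disk, matching the sub-millisecond timings in the log.
	func saveToTarIfMissing(image, dest string) error {
		start := time.Now()
		if _, err := os.Stat(dest); err == nil {
			fmt.Printf("cache image %q -> %q took %s (already exists)\n", image, dest, time.Since(start))
			return nil
		}
		// ...on a cache miss, pull the image and write the tarball here...
		return fmt.Errorf("cache miss handling not implemented in this sketch")
	}

	func main() {
		fmt.Println(saveToTarIfMissing("registry.k8s.io/pause:3.1", "/tmp/pause_3.1"))
	}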
	I1108 00:06:55.184876   49282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:06:55.184914   49282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:06:55.198689   49282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32915
	I1108 00:06:55.199055   49282 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:06:55.199477   49282 main.go:141] libmachine: Using API Version  1
	I1108 00:06:55.199498   49282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:06:55.199827   49282 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:06:55.199978   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .DriverName
	I1108 00:06:55.200146   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetState
	I1108 00:06:55.201675   49282 fix.go:102] recreateIfNeeded on stopped-upgrade-688874: state=Stopped err=<nil>
	I1108 00:06:55.201712   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .DriverName
	W1108 00:06:55.201862   49282 fix.go:128] unexpected machine state, will restart: <nil>
	I1108 00:06:55.204381   49282 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-688874" ...
	I1108 00:06:55.205766   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .Start
	I1108 00:06:55.205929   49282 main.go:141] libmachine: (stopped-upgrade-688874) Ensuring networks are active...
	I1108 00:06:55.206688   49282 main.go:141] libmachine: (stopped-upgrade-688874) Ensuring network default is active
	I1108 00:06:55.207027   49282 main.go:141] libmachine: (stopped-upgrade-688874) Ensuring network minikube-net is active
	I1108 00:06:55.207390   49282 main.go:141] libmachine: (stopped-upgrade-688874) Getting domain xml...
	I1108 00:06:55.208113   49282 main.go:141] libmachine: (stopped-upgrade-688874) Creating domain...
	I1108 00:06:56.449985   49282 main.go:141] libmachine: (stopped-upgrade-688874) Waiting to get IP...
	I1108 00:06:56.451081   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:06:56.451544   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | unable to find current IP address of domain stopped-upgrade-688874 in network minikube-net
	I1108 00:06:56.451634   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | I1108 00:06:56.451545   49316 retry.go:31] will retry after 224.219349ms: waiting for machine to come up
	I1108 00:06:56.676850   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:06:56.677356   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | unable to find current IP address of domain stopped-upgrade-688874 in network minikube-net
	I1108 00:06:56.677391   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | I1108 00:06:56.677304   49316 retry.go:31] will retry after 375.79659ms: waiting for machine to come up
	I1108 00:06:57.055032   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:06:57.055469   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | unable to find current IP address of domain stopped-upgrade-688874 in network minikube-net
	I1108 00:06:57.055531   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | I1108 00:06:57.055444   49316 retry.go:31] will retry after 357.084384ms: waiting for machine to come up
	I1108 00:06:57.413797   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:06:57.414244   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | unable to find current IP address of domain stopped-upgrade-688874 in network minikube-net
	I1108 00:06:57.414269   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | I1108 00:06:57.414191   49316 retry.go:31] will retry after 371.85665ms: waiting for machine to come up
	I1108 00:06:57.787691   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:06:57.788141   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | unable to find current IP address of domain stopped-upgrade-688874 in network minikube-net
	I1108 00:06:57.788164   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | I1108 00:06:57.788087   49316 retry.go:31] will retry after 707.318059ms: waiting for machine to come up
	I1108 00:06:58.496934   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:06:58.497396   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | unable to find current IP address of domain stopped-upgrade-688874 in network minikube-net
	I1108 00:06:58.497428   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | I1108 00:06:58.497342   49316 retry.go:31] will retry after 943.971909ms: waiting for machine to come up
	I1108 00:06:59.442311   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:06:59.442808   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | unable to find current IP address of domain stopped-upgrade-688874 in network minikube-net
	I1108 00:06:59.442834   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | I1108 00:06:59.442768   49316 retry.go:31] will retry after 941.247394ms: waiting for machine to come up
	I1108 00:07:00.385311   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:00.385808   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | unable to find current IP address of domain stopped-upgrade-688874 in network minikube-net
	I1108 00:07:00.385832   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | I1108 00:07:00.385760   49316 retry.go:31] will retry after 1.277934227s: waiting for machine to come up
	I1108 00:07:01.665141   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:01.665683   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | unable to find current IP address of domain stopped-upgrade-688874 in network minikube-net
	I1108 00:07:01.665707   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | I1108 00:07:01.665641   49316 retry.go:31] will retry after 1.138498284s: waiting for machine to come up
	I1108 00:07:02.805710   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:02.806161   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | unable to find current IP address of domain stopped-upgrade-688874 in network minikube-net
	I1108 00:07:02.806189   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | I1108 00:07:02.806116   49316 retry.go:31] will retry after 2.203505757s: waiting for machine to come up
	I1108 00:07:05.011016   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:05.011534   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | unable to find current IP address of domain stopped-upgrade-688874 in network minikube-net
	I1108 00:07:05.011557   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | I1108 00:07:05.011484   49316 retry.go:31] will retry after 2.537288718s: waiting for machine to come up
	I1108 00:07:07.551315   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:07.551885   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | unable to find current IP address of domain stopped-upgrade-688874 in network minikube-net
	I1108 00:07:07.551922   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | I1108 00:07:07.551848   49316 retry.go:31] will retry after 3.311141963s: waiting for machine to come up
	I1108 00:07:10.864898   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:10.865347   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | unable to find current IP address of domain stopped-upgrade-688874 in network minikube-net
	I1108 00:07:10.865368   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | I1108 00:07:10.865313   49316 retry.go:31] will retry after 3.559335422s: waiting for machine to come up
	I1108 00:07:14.425855   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:14.426314   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | unable to find current IP address of domain stopped-upgrade-688874 in network minikube-net
	I1108 00:07:14.426362   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | I1108 00:07:14.426274   49316 retry.go:31] will retry after 3.474511894s: waiting for machine to come up
	I1108 00:07:17.901839   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:17.902389   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | unable to find current IP address of domain stopped-upgrade-688874 in network minikube-net
	I1108 00:07:17.902438   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | I1108 00:07:17.902310   49316 retry.go:31] will retry after 6.298377037s: waiting for machine to come up
	I1108 00:07:24.205455   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:24.205847   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | unable to find current IP address of domain stopped-upgrade-688874 in network minikube-net
	I1108 00:07:24.205872   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | I1108 00:07:24.205806   49316 retry.go:31] will retry after 8.219942863s: waiting for machine to come up
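The retry.go lines above poll libvirt for a DHCP lease with jittered, roughly exponential delays (224ms → 375ms → ... → 8.2s). A plausible reconstruction of that wait loop, assuming a hypothetical hasIP probe:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls until the domain reports an IP, sleeping with jittered,
	// roughly doubling delays like the retry.go output above.
	func waitForIP(hasIP func() bool, maxWait time.Duration) error {
		delay := 200 * time.Millisecond
		deadline := time.Now().Add(maxWait)
		for time.Now().Before(deadline) {
			if hasIP() {
				return nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
			time.Sleep(jittered)
			delay *= 2
		}
		return fmt.Errorf("machine did not come up within %s", maxWait)
	}

	func main() {
		start := time.Now()
		hasIP := func() bool { return time.Since(start) > 3*time.Second } // stand-in probe
		fmt.Println(waitForIP(hasIP, 30*time.Second))
	}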
	I1108 00:07:32.428058   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:32.428512   49282 main.go:141] libmachine: (stopped-upgrade-688874) Found IP for machine: 192.168.83.156
	I1108 00:07:32.428534   49282 main.go:141] libmachine: (stopped-upgrade-688874) Reserving static IP address...
	I1108 00:07:32.428562   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has current primary IP address 192.168.83.156 and MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:32.428928   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | found host DHCP lease matching {name: "stopped-upgrade-688874", mac: "52:54:00:35:d3:74", ip: "192.168.83.156"} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-08 01:07:19 +0000 UTC Type:0 Mac:52:54:00:35:d3:74 Iaid: IPaddr:192.168.83.156 Prefix:24 Hostname:stopped-upgrade-688874 Clientid:01:52:54:00:35:d3:74}
	I1108 00:07:32.428961   49282 main.go:141] libmachine: (stopped-upgrade-688874) Reserved static IP address: 192.168.83.156
	I1108 00:07:32.428974   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-688874", mac: "52:54:00:35:d3:74", ip: "192.168.83.156"}
	I1108 00:07:32.428987   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | Getting to WaitForSSH function...
	I1108 00:07:32.428999   49282 main.go:141] libmachine: (stopped-upgrade-688874) Waiting for SSH to be available...
	I1108 00:07:32.430822   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:32.431074   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:d3:74", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-08 01:07:19 +0000 UTC Type:0 Mac:52:54:00:35:d3:74 Iaid: IPaddr:192.168.83.156 Prefix:24 Hostname:stopped-upgrade-688874 Clientid:01:52:54:00:35:d3:74}
	I1108 00:07:32.431101   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined IP address 192.168.83.156 and MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:32.431260   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | Using SSH client type: external
	I1108 00:07:32.431279   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/stopped-upgrade-688874/id_rsa (-rw-------)
	I1108 00:07:32.431313   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.156 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/stopped-upgrade-688874/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1108 00:07:32.431327   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | About to run SSH command:
	I1108 00:07:32.431343   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | exit 0
	I1108 00:07:32.564042   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | SSH cmd err, output: <nil>: 
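The "Using SSH client type: external" lines above show minikube shelling out to /usr/bin/ssh with a fixed option set and running `exit 0` until the daemon answers. A self-contained sketch of that readiness probe (option list taken from the log, wrapper name hypothetical):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// sshReady runs `ssh ... exit 0` against the guest; a nil error means sshd
	// is accepting connections with the machine's private key.
	func sshReady(ip, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@" + ip,
			"exit 0",
		}
		out, err := exec.Command("ssh", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("ssh not ready: %v (output: %s)", err, out)
		}
		return nil
	}

	func main() {
		fmt.Println(sshReady("192.168.83.156", "/path/to/id_rsa")) // hypothetical key path
	}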
	I1108 00:07:32.564364   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetConfigRaw
	I1108 00:07:32.565013   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetIP
	I1108 00:07:32.567617   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:32.567995   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:d3:74", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-08 01:07:19 +0000 UTC Type:0 Mac:52:54:00:35:d3:74 Iaid: IPaddr:192.168.83.156 Prefix:24 Hostname:stopped-upgrade-688874 Clientid:01:52:54:00:35:d3:74}
	I1108 00:07:32.568014   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined IP address 192.168.83.156 and MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:32.568286   49282 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/stopped-upgrade-688874/config.json ...
	I1108 00:07:32.568450   49282 machine.go:88] provisioning docker machine ...
	I1108 00:07:32.568474   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .DriverName
	I1108 00:07:32.568663   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetMachineName
	I1108 00:07:32.568857   49282 buildroot.go:166] provisioning hostname "stopped-upgrade-688874"
	I1108 00:07:32.568876   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetMachineName
	I1108 00:07:32.569037   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHHostname
	I1108 00:07:32.571159   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:32.571458   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:d3:74", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-08 01:07:19 +0000 UTC Type:0 Mac:52:54:00:35:d3:74 Iaid: IPaddr:192.168.83.156 Prefix:24 Hostname:stopped-upgrade-688874 Clientid:01:52:54:00:35:d3:74}
	I1108 00:07:32.571487   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined IP address 192.168.83.156 and MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:32.571585   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHPort
	I1108 00:07:32.571739   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHKeyPath
	I1108 00:07:32.571882   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHKeyPath
	I1108 00:07:32.571991   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHUsername
	I1108 00:07:32.572122   49282 main.go:141] libmachine: Using SSH client type: native
	I1108 00:07:32.572459   49282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.83.156 22 <nil> <nil>}
	I1108 00:07:32.572473   49282 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-688874 && echo "stopped-upgrade-688874" | sudo tee /etc/hostname
	I1108 00:07:32.699539   49282 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-688874
	
	I1108 00:07:32.699572   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHHostname
	I1108 00:07:32.702093   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:32.702411   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:d3:74", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-08 01:07:19 +0000 UTC Type:0 Mac:52:54:00:35:d3:74 Iaid: IPaddr:192.168.83.156 Prefix:24 Hostname:stopped-upgrade-688874 Clientid:01:52:54:00:35:d3:74}
	I1108 00:07:32.702440   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined IP address 192.168.83.156 and MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:32.702571   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHPort
	I1108 00:07:32.702819   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHKeyPath
	I1108 00:07:32.703041   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHKeyPath
	I1108 00:07:32.703269   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHUsername
	I1108 00:07:32.703449   49282 main.go:141] libmachine: Using SSH client type: native
	I1108 00:07:32.703793   49282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.83.156 22 <nil> <nil>}
	I1108 00:07:32.703810   49282 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-688874' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-688874/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-688874' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 00:07:32.829132   49282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 00:07:32.829161   49282 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1108 00:07:32.829188   49282 buildroot.go:174] setting up certificates
	I1108 00:07:32.829202   49282 provision.go:83] configureAuth start
	I1108 00:07:32.829215   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetMachineName
	I1108 00:07:32.829494   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetIP
	I1108 00:07:32.831949   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:32.832368   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:d3:74", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-08 01:07:19 +0000 UTC Type:0 Mac:52:54:00:35:d3:74 Iaid: IPaddr:192.168.83.156 Prefix:24 Hostname:stopped-upgrade-688874 Clientid:01:52:54:00:35:d3:74}
	I1108 00:07:32.832399   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined IP address 192.168.83.156 and MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:32.832522   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHHostname
	I1108 00:07:32.834699   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:32.835033   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:d3:74", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-08 01:07:19 +0000 UTC Type:0 Mac:52:54:00:35:d3:74 Iaid: IPaddr:192.168.83.156 Prefix:24 Hostname:stopped-upgrade-688874 Clientid:01:52:54:00:35:d3:74}
	I1108 00:07:32.835058   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined IP address 192.168.83.156 and MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:32.835165   49282 provision.go:138] copyHostCerts
	I1108 00:07:32.835217   49282 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1108 00:07:32.835230   49282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1108 00:07:32.835306   49282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1108 00:07:32.835407   49282 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1108 00:07:32.835421   49282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1108 00:07:32.835459   49282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1108 00:07:32.835532   49282 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1108 00:07:32.835542   49282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1108 00:07:32.835573   49282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1108 00:07:32.835640   49282 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-688874 san=[192.168.83.156 192.168.83.156 localhost 127.0.0.1 minikube stopped-upgrade-688874]
	I1108 00:07:32.963823   49282 provision.go:172] copyRemoteCerts
	I1108 00:07:32.963883   49282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 00:07:32.963911   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHHostname
	I1108 00:07:32.966871   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:32.967204   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:d3:74", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-08 01:07:19 +0000 UTC Type:0 Mac:52:54:00:35:d3:74 Iaid: IPaddr:192.168.83.156 Prefix:24 Hostname:stopped-upgrade-688874 Clientid:01:52:54:00:35:d3:74}
	I1108 00:07:32.967238   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined IP address 192.168.83.156 and MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:32.967413   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHPort
	I1108 00:07:32.967672   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHKeyPath
	I1108 00:07:32.967841   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHUsername
	I1108 00:07:32.967984   49282 sshutil.go:53] new ssh client: &{IP:192.168.83.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/stopped-upgrade-688874/id_rsa Username:docker}
	I1108 00:07:33.055240   49282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 00:07:33.068531   49282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1108 00:07:33.081000   49282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 00:07:33.093413   49282 provision.go:86] duration metric: configureAuth took 264.201859ms
	I1108 00:07:33.093440   49282 buildroot.go:189] setting minikube options for container-runtime
	I1108 00:07:33.093647   49282 config.go:182] Loaded profile config "stopped-upgrade-688874": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1108 00:07:33.093744   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHHostname
	I1108 00:07:33.096273   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:33.096654   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:d3:74", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-08 01:07:19 +0000 UTC Type:0 Mac:52:54:00:35:d3:74 Iaid: IPaddr:192.168.83.156 Prefix:24 Hostname:stopped-upgrade-688874 Clientid:01:52:54:00:35:d3:74}
	I1108 00:07:33.096687   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined IP address 192.168.83.156 and MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:33.096855   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHPort
	I1108 00:07:33.097041   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHKeyPath
	I1108 00:07:33.097197   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHKeyPath
	I1108 00:07:33.097339   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHUsername
	I1108 00:07:33.097474   49282 main.go:141] libmachine: Using SSH client type: native
	I1108 00:07:33.097803   49282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.83.156 22 <nil> <nil>}
	I1108 00:07:33.097828   49282 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 00:07:34.484315   49282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 00:07:34.484341   49282 machine.go:91] provisioned docker machine in 1.915878046s
	I1108 00:07:34.484351   49282 start.go:300] post-start starting for "stopped-upgrade-688874" (driver="kvm2")
	I1108 00:07:34.484361   49282 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 00:07:34.484392   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .DriverName
	I1108 00:07:34.484707   49282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 00:07:34.484742   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHHostname
	I1108 00:07:34.487666   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:34.488041   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:d3:74", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-08 01:07:19 +0000 UTC Type:0 Mac:52:54:00:35:d3:74 Iaid: IPaddr:192.168.83.156 Prefix:24 Hostname:stopped-upgrade-688874 Clientid:01:52:54:00:35:d3:74}
	I1108 00:07:34.488070   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined IP address 192.168.83.156 and MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:34.488267   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHPort
	I1108 00:07:34.488467   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHKeyPath
	I1108 00:07:34.488668   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHUsername
	I1108 00:07:34.488886   49282 sshutil.go:53] new ssh client: &{IP:192.168.83.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/stopped-upgrade-688874/id_rsa Username:docker}
	I1108 00:07:34.575847   49282 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 00:07:34.579957   49282 info.go:137] Remote host: Buildroot 2019.02.7
	I1108 00:07:34.579976   49282 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1108 00:07:34.580023   49282 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1108 00:07:34.580105   49282 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1108 00:07:34.580194   49282 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 00:07:34.585797   49282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:07:34.599227   49282 start.go:303] post-start completed in 114.865393ms
	I1108 00:07:34.599250   49282 fix.go:56] fixHost completed within 39.414788233s
	I1108 00:07:34.599274   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHHostname
	I1108 00:07:34.602016   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:34.602390   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:d3:74", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-08 01:07:19 +0000 UTC Type:0 Mac:52:54:00:35:d3:74 Iaid: IPaddr:192.168.83.156 Prefix:24 Hostname:stopped-upgrade-688874 Clientid:01:52:54:00:35:d3:74}
	I1108 00:07:34.602411   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined IP address 192.168.83.156 and MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:34.602577   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHPort
	I1108 00:07:34.602761   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHKeyPath
	I1108 00:07:34.602891   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHKeyPath
	I1108 00:07:34.603057   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHUsername
	I1108 00:07:34.603247   49282 main.go:141] libmachine: Using SSH client type: native
	I1108 00:07:34.603551   49282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.83.156 22 <nil> <nil>}
	I1108 00:07:34.603563   49282 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1108 00:07:34.725291   49282 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699402054.681560395
	
	I1108 00:07:34.725315   49282 fix.go:206] guest clock: 1699402054.681560395
	I1108 00:07:34.725322   49282 fix.go:219] Guest: 2023-11-08 00:07:34.681560395 +0000 UTC Remote: 2023-11-08 00:07:34.599254327 +0000 UTC m=+40.034901873 (delta=82.306068ms)
	I1108 00:07:34.725340   49282 fix.go:190] guest clock delta is within tolerance: 82.306068ms
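The fix.go lines above parse `date +%s.%N` from the guest and compare it against the host clock; the run is accepted because the 82ms delta is inside tolerance. A minimal sketch of that comparison (the tolerance value here is assumed, not taken from the source):

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaOK reports whether the guest clock is close enough to the host
	// clock that the VM's time does not need to be resynchronized.
	func clockDeltaOK(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta is %s (tolerance %s)\n", delta, tolerance)
		return delta <= tolerance
	}

	func main() {
		guest := time.Unix(1699402054, 681560395) // parsed from `date +%s.%N` above
		host := guest.Add(-82306068 * time.Nanosecond)
		fmt.Println(clockDeltaOK(guest, host, 2*time.Second))
	}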
	I1108 00:07:34.725347   49282 start.go:83] releasing machines lock for "stopped-upgrade-688874", held for 39.54090762s
	I1108 00:07:34.725378   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .DriverName
	I1108 00:07:34.725665   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetIP
	I1108 00:07:34.728029   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:34.728424   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:d3:74", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-08 01:07:19 +0000 UTC Type:0 Mac:52:54:00:35:d3:74 Iaid: IPaddr:192.168.83.156 Prefix:24 Hostname:stopped-upgrade-688874 Clientid:01:52:54:00:35:d3:74}
	I1108 00:07:34.728450   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined IP address 192.168.83.156 and MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:34.728611   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .DriverName
	I1108 00:07:34.729070   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .DriverName
	I1108 00:07:34.729228   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .DriverName
	I1108 00:07:34.729318   49282 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 00:07:34.729365   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHHostname
	I1108 00:07:34.729424   49282 ssh_runner.go:195] Run: cat /version.json
	I1108 00:07:34.729444   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHHostname
	I1108 00:07:34.731799   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:34.731942   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:34.732146   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:d3:74", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-08 01:07:19 +0000 UTC Type:0 Mac:52:54:00:35:d3:74 Iaid: IPaddr:192.168.83.156 Prefix:24 Hostname:stopped-upgrade-688874 Clientid:01:52:54:00:35:d3:74}
	I1108 00:07:34.732174   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined IP address 192.168.83.156 and MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:34.732249   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:d3:74", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-08 01:07:19 +0000 UTC Type:0 Mac:52:54:00:35:d3:74 Iaid: IPaddr:192.168.83.156 Prefix:24 Hostname:stopped-upgrade-688874 Clientid:01:52:54:00:35:d3:74}
	I1108 00:07:34.732283   49282 main.go:141] libmachine: (stopped-upgrade-688874) DBG | domain stopped-upgrade-688874 has defined IP address 192.168.83.156 and MAC address 52:54:00:35:d3:74 in network minikube-net
	I1108 00:07:34.732313   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHPort
	I1108 00:07:34.732515   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHKeyPath
	I1108 00:07:34.732528   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHPort
	I1108 00:07:34.732678   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHKeyPath
	I1108 00:07:34.732709   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHUsername
	I1108 00:07:34.732855   49282 main.go:141] libmachine: (stopped-upgrade-688874) Calling .GetSSHUsername
	I1108 00:07:34.732848   49282 sshutil.go:53] new ssh client: &{IP:192.168.83.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/stopped-upgrade-688874/id_rsa Username:docker}
	I1108 00:07:34.732985   49282 sshutil.go:53] new ssh client: &{IP:192.168.83.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/stopped-upgrade-688874/id_rsa Username:docker}
	W1108 00:07:34.817486   49282 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1108 00:07:34.817570   49282 ssh_runner.go:195] Run: systemctl --version
	I1108 00:07:34.836275   49282 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 00:07:34.901933   49282 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 00:07:34.907348   49282 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 00:07:34.907420   49282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:07:34.912796   49282 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 00:07:34.912827   49282 start.go:472] detecting cgroup driver to use...
	I1108 00:07:34.912889   49282 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 00:07:34.922836   49282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 00:07:34.931030   49282 docker.go:203] disabling cri-docker service (if available) ...
	I1108 00:07:34.931093   49282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 00:07:34.938956   49282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 00:07:34.946392   49282 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1108 00:07:34.953521   49282 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1108 00:07:34.953565   49282 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 00:07:35.046438   49282 docker.go:219] disabling docker service ...
	I1108 00:07:35.046500   49282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 00:07:35.059077   49282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 00:07:35.066709   49282 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 00:07:35.145011   49282 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 00:07:35.222634   49282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 00:07:35.230566   49282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 00:07:35.241149   49282 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1108 00:07:35.241207   49282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:07:35.249154   49282 out.go:177] 
	W1108 00:07:35.250455   49282 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1108 00:07:35.250472   49282 out.go:239] * 
	W1108 00:07:35.251277   49282 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 00:07:35.252264   49282 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-688874 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (264.94s)
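The root cause is visible in the stderr above: the v1.6.0 ISO (Buildroot 2019.02.7) predates the /etc/crio/crio.conf.d drop-in layout, so the unconditional `sed` on 02-crio.conf exits 1 and the start aborts with RUNTIME_ENABLE. A defensive sketch that probes for the drop-in before editing, falling back to the legacy single-file config (a hypothetical guard, not minikube's actual fix):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// setPauseImage edits the CRI-O drop-in when it exists and falls back to
	// the legacy /etc/crio/crio.conf otherwise, avoiding the sed failure above.
	func setPauseImage(run func(cmd string) error, image string) error {
		sed := `sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`
		for _, conf := range []string{"/etc/crio/crio.conf.d/02-crio.conf", "/etc/crio/crio.conf"} {
			if err := run("test -f " + conf); err != nil {
				continue // config file absent on this ISO generation
			}
			return run(fmt.Sprintf(sed, image, conf))
		}
		return fmt.Errorf("no CRI-O config file found to update")
	}

	func main() {
		run := func(cmd string) error { return exec.Command("sh", "-c", cmd).Run() }
		fmt.Println(setPauseImage(run, "registry.k8s.io/pause:3.1"))
	}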

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (139.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-590541 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-590541 --alsologtostderr -v=3: exit status 82 (2m0.852021405s)
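Exit status 82 corresponds to a stop timeout: as the stderr below shows, the driver issues Stop, polls the domain state once a second for 60 attempts (`Waiting for machine to stop N/60`), retries the Stop call, and eventually gives up. A minimal sketch of that loop, with hypothetical state and stop callbacks:

	package main

	import (
		"fmt"
		"time"
	)

	// waitForStop mirrors the 0/60..59/60 polling in the stderr below: issue a
	// stop, then check the machine state once a second for up to 60 attempts.
	func waitForStop(stop func() error, state func() string) error {
		if err := stop(); err != nil {
			return err
		}
		for i := 0; i < 60; i++ {
			if state() == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/60\n", i)
			time.Sleep(time.Second)
		}
		return fmt.Errorf("machine did not stop within 60s") // caller retries, then exits 82
	}

	func main() {
		stopped := false
		stop := func() error { stopped = true; return nil }
		state := func() string {
			if stopped {
				return "Stopped"
			}
			return "Running"
		}
		fmt.Println(waitForStop(stop, state))
	}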

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-590541"  ...
	* Stopping node "old-k8s-version-590541"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 00:05:22.792319   48417 out.go:296] Setting OutFile to fd 1 ...
	I1108 00:05:22.792530   48417 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:05:22.792562   48417 out.go:309] Setting ErrFile to fd 2...
	I1108 00:05:22.792585   48417 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:05:22.792803   48417 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
	I1108 00:05:22.793223   48417 out.go:303] Setting JSON to false
	I1108 00:05:22.793342   48417 mustload.go:65] Loading cluster: old-k8s-version-590541
	I1108 00:05:22.793704   48417 config.go:182] Loaded profile config "old-k8s-version-590541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1108 00:05:22.793789   48417 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/config.json ...
	I1108 00:05:22.793974   48417 mustload.go:65] Loading cluster: old-k8s-version-590541
	I1108 00:05:22.794113   48417 config.go:182] Loaded profile config "old-k8s-version-590541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1108 00:05:22.794152   48417 stop.go:39] StopHost: old-k8s-version-590541
	I1108 00:05:22.794583   48417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:05:22.794649   48417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:05:22.809096   48417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38299
	I1108 00:05:22.809484   48417 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:05:22.810024   48417 main.go:141] libmachine: Using API Version  1
	I1108 00:05:22.810045   48417 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:05:22.810394   48417 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:05:22.813222   48417 out.go:177] * Stopping node "old-k8s-version-590541"  ...
	I1108 00:05:22.814528   48417 main.go:141] libmachine: Stopping "old-k8s-version-590541"...
	I1108 00:05:22.814545   48417 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetState
	I1108 00:05:22.816246   48417 main.go:141] libmachine: (old-k8s-version-590541) Calling .Stop
	I1108 00:05:22.819439   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 0/60
	I1108 00:05:23.821289   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 1/60
	I1108 00:05:24.822775   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 2/60
	I1108 00:05:25.824665   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 3/60
	I1108 00:05:26.826191   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 4/60
	I1108 00:05:27.828521   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 5/60
	I1108 00:05:28.829873   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 6/60
	I1108 00:05:29.831569   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 7/60
	I1108 00:05:30.833873   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 8/60
	I1108 00:05:31.836168   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 9/60
	I1108 00:05:32.838328   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 10/60
	I1108 00:05:33.839948   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 11/60
	I1108 00:05:34.841499   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 12/60
	I1108 00:05:35.842836   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 13/60
	I1108 00:05:36.844634   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 14/60
	I1108 00:05:37.847203   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 15/60
	I1108 00:05:38.849038   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 16/60
	I1108 00:05:39.850667   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 17/60
	I1108 00:05:40.852179   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 18/60
	I1108 00:05:41.854241   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 19/60
	I1108 00:05:42.856251   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 20/60
	I1108 00:05:43.857754   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 21/60
	I1108 00:05:44.859322   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 22/60
	I1108 00:05:45.860892   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 23/60
	I1108 00:05:46.862541   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 24/60
	I1108 00:05:47.865287   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 25/60
	I1108 00:05:48.866999   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 26/60
	I1108 00:05:49.868854   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 27/60
	I1108 00:05:50.869949   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 28/60
	I1108 00:05:51.871199   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 29/60
	I1108 00:05:52.872891   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 30/60
	I1108 00:05:53.874544   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 31/60
	I1108 00:05:54.876345   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 32/60
	I1108 00:05:55.877944   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 33/60
	I1108 00:05:56.879274   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 34/60
	I1108 00:05:57.880931   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 35/60
	I1108 00:05:58.883213   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 36/60
	I1108 00:05:59.884992   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 37/60
	I1108 00:06:00.887299   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 38/60
	I1108 00:06:01.888886   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 39/60
	I1108 00:06:02.890666   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 40/60
	I1108 00:06:03.891877   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 41/60
	I1108 00:06:04.893800   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 42/60
	I1108 00:06:05.895516   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 43/60
	I1108 00:06:06.896883   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 44/60
	I1108 00:06:07.898564   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 45/60
	I1108 00:06:08.899770   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 46/60
	I1108 00:06:09.901942   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 47/60
	I1108 00:06:10.903178   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 48/60
	I1108 00:06:11.904620   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 49/60
	I1108 00:06:12.906357   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 50/60
	I1108 00:06:13.907995   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 51/60
	I1108 00:06:14.909296   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 52/60
	I1108 00:06:15.910643   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 53/60
	I1108 00:06:16.912982   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 54/60
	I1108 00:06:17.914993   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 55/60
	I1108 00:06:18.916210   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 56/60
	I1108 00:06:19.917747   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 57/60
	I1108 00:06:20.919448   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 58/60
	I1108 00:06:21.920905   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 59/60
	I1108 00:06:22.921752   48417 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1108 00:06:22.921810   48417 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1108 00:06:22.921825   48417 retry.go:31] will retry after 537.265446ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1108 00:06:23.459444   48417 stop.go:39] StopHost: old-k8s-version-590541
	I1108 00:06:23.459790   48417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:06:23.459832   48417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:06:23.474788   48417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37271
	I1108 00:06:23.475190   48417 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:06:23.475636   48417 main.go:141] libmachine: Using API Version  1
	I1108 00:06:23.475661   48417 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:06:23.475966   48417 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:06:23.478176   48417 out.go:177] * Stopping node "old-k8s-version-590541"  ...
	I1108 00:06:23.479664   48417 main.go:141] libmachine: Stopping "old-k8s-version-590541"...
	I1108 00:06:23.479680   48417 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetState
	I1108 00:06:23.481403   48417 main.go:141] libmachine: (old-k8s-version-590541) Calling .Stop
	I1108 00:06:23.484676   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 0/60
	I1108 00:06:24.486366   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 1/60
	I1108 00:06:25.487870   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 2/60
	I1108 00:06:26.489191   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 3/60
	I1108 00:06:27.490557   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 4/60
	I1108 00:06:28.491821   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 5/60
	I1108 00:06:29.493157   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 6/60
	I1108 00:06:30.494463   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 7/60
	I1108 00:06:31.496844   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 8/60
	I1108 00:06:32.497968   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 9/60
	I1108 00:06:33.499709   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 10/60
	I1108 00:06:34.500909   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 11/60
	I1108 00:06:35.502185   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 12/60
	I1108 00:06:36.503700   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 13/60
	I1108 00:06:37.504884   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 14/60
	I1108 00:06:38.506663   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 15/60
	I1108 00:06:39.508088   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 16/60
	I1108 00:06:40.509496   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 17/60
	I1108 00:06:41.511380   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 18/60
	I1108 00:06:42.512699   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 19/60
	I1108 00:06:43.514110   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 20/60
	I1108 00:06:44.515338   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 21/60
	I1108 00:06:45.516632   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 22/60
	I1108 00:06:46.518114   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 23/60
	I1108 00:06:47.519317   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 24/60
	I1108 00:06:48.521412   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 25/60
	I1108 00:06:49.522681   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 26/60
	I1108 00:06:50.524004   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 27/60
	I1108 00:06:51.525611   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 28/60
	I1108 00:06:52.527845   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 29/60
	I1108 00:06:53.529024   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 30/60
	I1108 00:06:54.531364   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 31/60
	I1108 00:06:55.532886   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 32/60
	I1108 00:06:56.534179   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 33/60
	I1108 00:06:57.535507   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 34/60
	I1108 00:06:58.537731   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 35/60
	I1108 00:06:59.538959   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 36/60
	I1108 00:07:00.540787   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 37/60
	I1108 00:07:01.542195   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 38/60
	I1108 00:07:02.543602   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 39/60
	I1108 00:07:03.545176   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 40/60
	I1108 00:07:04.546655   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 41/60
	I1108 00:07:05.548130   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 42/60
	I1108 00:07:06.549586   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 43/60
	I1108 00:07:07.551322   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 44/60
	I1108 00:07:08.553746   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 45/60
	I1108 00:07:09.555226   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 46/60
	I1108 00:07:10.556537   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 47/60
	I1108 00:07:11.557900   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 48/60
	I1108 00:07:12.559386   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 49/60
	I1108 00:07:13.560863   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 50/60
	I1108 00:07:14.562289   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 51/60
	I1108 00:07:15.563542   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 52/60
	I1108 00:07:16.565082   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 53/60
	I1108 00:07:17.566493   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 54/60
	I1108 00:07:18.567821   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 55/60
	I1108 00:07:19.569426   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 56/60
	I1108 00:07:20.570787   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 57/60
	I1108 00:07:21.572987   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 58/60
	I1108 00:07:22.575180   48417 main.go:141] libmachine: (old-k8s-version-590541) Waiting for machine to stop 59/60
	I1108 00:07:23.576474   48417 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1108 00:07:23.576520   48417 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1108 00:07:23.578503   48417 out.go:177] 
	W1108 00:07:23.579874   48417 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1108 00:07:23.579893   48417 out.go:239] * 
	W1108 00:07:23.582349   48417 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 00:07:23.583930   48417 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-590541 --alsologtostderr -v=3" : exit status 82
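For context on this failure mode: the trace above shows minikube requesting a stop, polling the VM state once per second for up to 60 attempts, retrying once after a sub-second backoff, and finally exiting with GUEST_STOP_TIMEOUT (exit status 82). The sketch below is a minimal Go approximation of that poll-and-retry pattern; the Driver interface and stuckVM type are illustrative stand-ins, not minikube's actual stop.go or kvm2 plugin API.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// Driver stands in for the machine driver plugin; the names are
	// illustrative, not minikube's real API.
	type Driver interface {
		Stop() error               // request an asynchronous power-off
		GetState() (string, error) // report the current VM state
	}

	// stopHost mirrors the pattern in the log: request a stop, then
	// poll once per second for up to 60 attempts.
	func stopHost(d Driver) error {
		if err := d.Stop(); err != nil {
			return err
		}
		for i := 0; i < 60; i++ {
			fmt.Printf("Waiting for machine to stop %d/60\n", i)
			state, err := d.GetState()
			if err != nil {
				return err
			}
			if state == "Stopped" {
				return nil
			}
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	// stuckVM simulates a guest that never powers off, which is what
	// the failing tests observed.
	type stuckVM struct{}

	func (stuckVM) Stop() error               { return nil }
	func (stuckVM) GetState() (string, error) { return "Running", nil }

	func main() {
		// One retry after a short backoff, matching the "will retry
		// after 537.265446ms" line; a second failure is fatal
		// (GUEST_STOP_TIMEOUT, exit status 82).
		if err := stopHost(stuckVM{}); err != nil {
			time.Sleep(537 * time.Millisecond)
			if err = stopHost(stuckVM{}); err != nil {
				fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err)
			}
		}
	}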
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-590541 -n old-k8s-version-590541
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-590541 -n old-k8s-version-590541: exit status 3 (18.623741052s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1108 00:07:42.209097   49493 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.49:22: connect: no route to host
	E1108 00:07:42.209116   49493 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.49:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-590541" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (139.48s)
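Because the profile runs on the kvm2 driver, the stuck guest could also be inspected on the CI host with libvirt's own tooling, e.g. `virsh --connect qemu:///system list --all` to confirm the domain state, or `virsh --connect qemu:///system destroy old-k8s-version-590541` to hard power it off as a last resort. This is a hedged troubleshooting note: the connection URI and domain name are assumptions inferred from the profile name, not taken from the log.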

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (140.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-320390 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-320390 --alsologtostderr -v=3: exit status 82 (2m1.707920418s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-320390"  ...
	* Stopping node "no-preload-320390"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 00:06:41.571093   49159 out.go:296] Setting OutFile to fd 1 ...
	I1108 00:06:41.571372   49159 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:06:41.571382   49159 out.go:309] Setting ErrFile to fd 2...
	I1108 00:06:41.571387   49159 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:06:41.571609   49159 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
	I1108 00:06:41.571848   49159 out.go:303] Setting JSON to false
	I1108 00:06:41.571938   49159 mustload.go:65] Loading cluster: no-preload-320390
	I1108 00:06:41.572342   49159 config.go:182] Loaded profile config "no-preload-320390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:06:41.572430   49159 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/config.json ...
	I1108 00:06:41.572597   49159 mustload.go:65] Loading cluster: no-preload-320390
	I1108 00:06:41.572699   49159 config.go:182] Loaded profile config "no-preload-320390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:06:41.572728   49159 stop.go:39] StopHost: no-preload-320390
	I1108 00:06:41.573169   49159 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:06:41.573224   49159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:06:41.588323   49159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42535
	I1108 00:06:41.588841   49159 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:06:41.589493   49159 main.go:141] libmachine: Using API Version  1
	I1108 00:06:41.589521   49159 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:06:41.589918   49159 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:06:41.591812   49159 out.go:177] * Stopping node "no-preload-320390"  ...
	I1108 00:06:41.593373   49159 main.go:141] libmachine: Stopping "no-preload-320390"...
	I1108 00:06:41.593395   49159 main.go:141] libmachine: (no-preload-320390) Calling .GetState
	I1108 00:06:41.595111   49159 main.go:141] libmachine: (no-preload-320390) Calling .Stop
	I1108 00:06:41.598679   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 0/60
	I1108 00:06:42.599929   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 1/60
	I1108 00:06:43.601278   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 2/60
	I1108 00:06:44.603305   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 3/60
	I1108 00:06:45.604475   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 4/60
	I1108 00:06:46.606322   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 5/60
	I1108 00:06:47.607457   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 6/60
	I1108 00:06:48.608980   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 7/60
	I1108 00:06:49.611217   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 8/60
	I1108 00:06:50.612478   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 9/60
	I1108 00:06:51.614502   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 10/60
	I1108 00:06:52.616510   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 11/60
	I1108 00:06:53.617903   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 12/60
	I1108 00:06:54.619262   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 13/60
	I1108 00:06:55.620769   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 14/60
	I1108 00:06:56.622561   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 15/60
	I1108 00:06:57.623883   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 16/60
	I1108 00:06:58.625285   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 17/60
	I1108 00:06:59.626632   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 18/60
	I1108 00:07:00.627909   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 19/60
	I1108 00:07:01.630081   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 20/60
	I1108 00:07:02.631403   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 21/60
	I1108 00:07:03.632604   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 22/60
	I1108 00:07:04.634070   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 23/60
	I1108 00:07:05.635485   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 24/60
	I1108 00:07:06.637489   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 25/60
	I1108 00:07:07.639343   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 26/60
	I1108 00:07:08.640840   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 27/60
	I1108 00:07:09.642163   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 28/60
	I1108 00:07:10.643400   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 29/60
	I1108 00:07:11.645589   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 30/60
	I1108 00:07:12.646926   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 31/60
	I1108 00:07:13.648126   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 32/60
	I1108 00:07:14.649561   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 33/60
	I1108 00:07:15.650902   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 34/60
	I1108 00:07:16.652860   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 35/60
	I1108 00:07:17.654542   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 36/60
	I1108 00:07:18.655930   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 37/60
	I1108 00:07:19.657912   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 38/60
	I1108 00:07:20.659208   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 39/60
	I1108 00:07:21.661399   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 40/60
	I1108 00:07:22.662539   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 41/60
	I1108 00:07:23.663751   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 42/60
	I1108 00:07:24.665199   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 43/60
	I1108 00:07:25.666463   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 44/60
	I1108 00:07:26.668345   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 45/60
	I1108 00:07:27.669726   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 46/60
	I1108 00:07:28.670992   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 47/60
	I1108 00:07:29.672149   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 48/60
	I1108 00:07:30.673600   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 49/60
	I1108 00:07:31.675635   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 50/60
	I1108 00:07:32.676904   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 51/60
	I1108 00:07:33.678302   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 52/60
	I1108 00:07:34.679610   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 53/60
	I1108 00:07:35.681012   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 54/60
	I1108 00:07:36.938353   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 55/60
	I1108 00:07:37.939834   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 56/60
	I1108 00:07:38.942379   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 57/60
	I1108 00:07:39.943996   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 58/60
	I1108 00:07:40.945757   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 59/60
	I1108 00:07:41.947161   49159 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1108 00:07:41.947201   49159 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1108 00:07:41.947220   49159 retry.go:31] will retry after 1.140216617s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1108 00:07:43.088466   49159 stop.go:39] StopHost: no-preload-320390
	I1108 00:07:43.088888   49159 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:07:43.088931   49159 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:07:43.103271   49159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44651
	I1108 00:07:43.103693   49159 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:07:43.104199   49159 main.go:141] libmachine: Using API Version  1
	I1108 00:07:43.104247   49159 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:07:43.104559   49159 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:07:43.106749   49159 out.go:177] * Stopping node "no-preload-320390"  ...
	I1108 00:07:43.108421   49159 main.go:141] libmachine: Stopping "no-preload-320390"...
	I1108 00:07:43.108442   49159 main.go:141] libmachine: (no-preload-320390) Calling .GetState
	I1108 00:07:43.110007   49159 main.go:141] libmachine: (no-preload-320390) Calling .Stop
	I1108 00:07:43.113668   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 0/60
	I1108 00:07:44.115918   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 1/60
	I1108 00:07:45.117232   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 2/60
	I1108 00:07:46.118783   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 3/60
	I1108 00:07:47.120194   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 4/60
	I1108 00:07:48.121734   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 5/60
	I1108 00:07:49.123542   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 6/60
	I1108 00:07:50.125084   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 7/60
	I1108 00:07:51.127443   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 8/60
	I1108 00:07:52.129012   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 9/60
	I1108 00:07:53.131013   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 10/60
	I1108 00:07:54.132296   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 11/60
	I1108 00:07:55.133524   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 12/60
	I1108 00:07:56.134787   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 13/60
	I1108 00:07:57.136123   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 14/60
	I1108 00:07:58.137775   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 15/60
	I1108 00:07:59.139225   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 16/60
	I1108 00:08:00.140908   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 17/60
	I1108 00:08:01.142316   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 18/60
	I1108 00:08:02.143467   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 19/60
	I1108 00:08:03.145323   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 20/60
	I1108 00:08:04.147415   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 21/60
	I1108 00:08:05.149723   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 22/60
	I1108 00:08:06.151363   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 23/60
	I1108 00:08:07.153281   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 24/60
	I1108 00:08:08.155336   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 25/60
	I1108 00:08:09.156704   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 26/60
	I1108 00:08:10.158249   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 27/60
	I1108 00:08:11.159861   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 28/60
	I1108 00:08:12.161244   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 29/60
	I1108 00:08:13.162892   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 30/60
	I1108 00:08:14.164088   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 31/60
	I1108 00:08:15.165483   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 32/60
	I1108 00:08:16.167161   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 33/60
	I1108 00:08:17.168503   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 34/60
	I1108 00:08:18.170255   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 35/60
	I1108 00:08:19.172024   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 36/60
	I1108 00:08:20.173456   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 37/60
	I1108 00:08:21.175552   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 38/60
	I1108 00:08:22.176784   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 39/60
	I1108 00:08:23.178235   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 40/60
	I1108 00:08:24.179720   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 41/60
	I1108 00:08:25.181192   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 42/60
	I1108 00:08:26.182714   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 43/60
	I1108 00:08:27.184086   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 44/60
	I1108 00:08:28.185859   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 45/60
	I1108 00:08:29.187189   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 46/60
	I1108 00:08:30.188483   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 47/60
	I1108 00:08:31.189871   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 48/60
	I1108 00:08:32.191765   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 49/60
	I1108 00:08:33.193427   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 50/60
	I1108 00:08:34.195267   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 51/60
	I1108 00:08:35.196827   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 52/60
	I1108 00:08:36.199032   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 53/60
	I1108 00:08:37.200307   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 54/60
	I1108 00:08:38.201951   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 55/60
	I1108 00:08:39.203532   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 56/60
	I1108 00:08:40.205609   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 57/60
	I1108 00:08:41.207058   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 58/60
	I1108 00:08:42.208493   49159 main.go:141] libmachine: (no-preload-320390) Waiting for machine to stop 59/60
	I1108 00:08:43.209861   49159 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1108 00:08:43.209912   49159 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1108 00:08:43.212157   49159 out.go:177] 
	W1108 00:08:43.213738   49159 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1108 00:08:43.213771   49159 out.go:239] * 
	W1108 00:08:43.216276   49159 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 00:08:43.217822   49159 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-320390 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-320390 -n no-preload-320390
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-320390 -n no-preload-320390: exit status 3 (18.604778372s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1108 00:09:01.825193   50253 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.176:22: connect: no route to host
	E1108 00:09:01.825211   50253 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.176:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-320390" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (140.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.75s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-253253 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-253253 --alsologtostderr -v=3: exit status 82 (2m1.182409354s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-253253"  ...
	* Stopping node "embed-certs-253253"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 00:06:49.807485   49236 out.go:296] Setting OutFile to fd 1 ...
	I1108 00:06:49.807619   49236 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:06:49.807630   49236 out.go:309] Setting ErrFile to fd 2...
	I1108 00:06:49.807638   49236 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:06:49.807827   49236 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
	I1108 00:06:49.808051   49236 out.go:303] Setting JSON to false
	I1108 00:06:49.808129   49236 mustload.go:65] Loading cluster: embed-certs-253253
	I1108 00:06:49.808446   49236 config.go:182] Loaded profile config "embed-certs-253253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:06:49.808526   49236 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/config.json ...
	I1108 00:06:49.808681   49236 mustload.go:65] Loading cluster: embed-certs-253253
	I1108 00:06:49.808786   49236 config.go:182] Loaded profile config "embed-certs-253253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:06:49.808848   49236 stop.go:39] StopHost: embed-certs-253253
	I1108 00:06:49.809228   49236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:06:49.809278   49236 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:06:49.827004   49236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45499
	I1108 00:06:49.827572   49236 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:06:49.828160   49236 main.go:141] libmachine: Using API Version  1
	I1108 00:06:49.828186   49236 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:06:49.828547   49236 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:06:49.830516   49236 out.go:177] * Stopping node "embed-certs-253253"  ...
	I1108 00:06:49.832301   49236 main.go:141] libmachine: Stopping "embed-certs-253253"...
	I1108 00:06:49.832320   49236 main.go:141] libmachine: (embed-certs-253253) Calling .GetState
	I1108 00:06:49.834134   49236 main.go:141] libmachine: (embed-certs-253253) Calling .Stop
	I1108 00:06:49.837637   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 0/60
	I1108 00:06:50.838895   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 1/60
	I1108 00:06:51.840263   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 2/60
	I1108 00:06:52.841507   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 3/60
	I1108 00:06:53.842957   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 4/60
	I1108 00:06:54.844860   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 5/60
	I1108 00:06:55.846373   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 6/60
	I1108 00:06:56.847653   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 7/60
	I1108 00:06:57.849108   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 8/60
	I1108 00:06:58.850499   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 9/60
	I1108 00:06:59.851719   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 10/60
	I1108 00:07:00.853203   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 11/60
	I1108 00:07:01.854642   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 12/60
	I1108 00:07:02.855828   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 13/60
	I1108 00:07:03.857320   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 14/60
	I1108 00:07:04.859597   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 15/60
	I1108 00:07:05.861004   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 16/60
	I1108 00:07:06.862568   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 17/60
	I1108 00:07:07.863916   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 18/60
	I1108 00:07:08.865460   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 19/60
	I1108 00:07:09.867214   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 20/60
	I1108 00:07:10.868388   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 21/60
	I1108 00:07:11.869965   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 22/60
	I1108 00:07:12.871479   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 23/60
	I1108 00:07:13.873706   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 24/60
	I1108 00:07:14.875451   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 25/60
	I1108 00:07:15.876789   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 26/60
	I1108 00:07:16.878184   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 27/60
	I1108 00:07:17.879730   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 28/60
	I1108 00:07:18.881167   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 29/60
	I1108 00:07:19.883116   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 30/60
	I1108 00:07:20.884475   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 31/60
	I1108 00:07:21.885726   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 32/60
	I1108 00:07:22.887200   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 33/60
	I1108 00:07:23.888587   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 34/60
	I1108 00:07:24.890565   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 35/60
	I1108 00:07:25.891880   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 36/60
	I1108 00:07:26.893375   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 37/60
	I1108 00:07:27.894698   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 38/60
	I1108 00:07:28.896023   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 39/60
	I1108 00:07:29.898106   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 40/60
	I1108 00:07:30.899234   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 41/60
	I1108 00:07:31.900550   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 42/60
	I1108 00:07:32.901816   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 43/60
	I1108 00:07:33.903031   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 44/60
	I1108 00:07:34.904789   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 45/60
	I1108 00:07:35.906127   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 46/60
	I1108 00:07:36.938215   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 47/60
	I1108 00:07:37.939959   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 48/60
	I1108 00:07:38.942644   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 49/60
	I1108 00:07:39.944707   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 50/60
	I1108 00:07:40.946402   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 51/60
	I1108 00:07:41.947938   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 52/60
	I1108 00:07:42.949284   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 53/60
	I1108 00:07:43.950647   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 54/60
	I1108 00:07:44.952618   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 55/60
	I1108 00:07:45.953794   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 56/60
	I1108 00:07:46.955730   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 57/60
	I1108 00:07:47.958406   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 58/60
	I1108 00:07:48.960065   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 59/60
	I1108 00:07:49.961309   49236 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1108 00:07:49.961371   49236 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1108 00:07:49.961394   49236 retry.go:31] will retry after 845.4738ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1108 00:07:50.807348   49236 stop.go:39] StopHost: embed-certs-253253
	I1108 00:07:50.807729   49236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:07:50.807779   49236 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:07:50.821833   49236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44155
	I1108 00:07:50.822301   49236 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:07:50.822795   49236 main.go:141] libmachine: Using API Version  1
	I1108 00:07:50.822824   49236 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:07:50.823114   49236 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:07:50.825307   49236 out.go:177] * Stopping node "embed-certs-253253"  ...
	I1108 00:07:50.826910   49236 main.go:141] libmachine: Stopping "embed-certs-253253"...
	I1108 00:07:50.826929   49236 main.go:141] libmachine: (embed-certs-253253) Calling .GetState
	I1108 00:07:50.828576   49236 main.go:141] libmachine: (embed-certs-253253) Calling .Stop
	I1108 00:07:50.832222   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 0/60
	I1108 00:07:51.833647   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 1/60
	I1108 00:07:52.834927   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 2/60
	I1108 00:07:53.836536   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 3/60
	I1108 00:07:54.838056   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 4/60
	I1108 00:07:55.840119   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 5/60
	I1108 00:07:56.841436   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 6/60
	I1108 00:07:57.843024   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 7/60
	I1108 00:07:58.844230   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 8/60
	I1108 00:07:59.845595   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 9/60
	I1108 00:08:00.847848   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 10/60
	I1108 00:08:01.849206   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 11/60
	I1108 00:08:02.850639   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 12/60
	I1108 00:08:03.852054   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 13/60
	I1108 00:08:04.853516   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 14/60
	I1108 00:08:05.855898   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 15/60
	I1108 00:08:06.857293   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 16/60
	I1108 00:08:07.859383   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 17/60
	I1108 00:08:08.860832   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 18/60
	I1108 00:08:09.862656   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 19/60
	I1108 00:08:10.864420   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 20/60
	I1108 00:08:11.865703   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 21/60
	I1108 00:08:12.867305   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 22/60
	I1108 00:08:13.869054   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 23/60
	I1108 00:08:14.870564   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 24/60
	I1108 00:08:15.872271   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 25/60
	I1108 00:08:16.873859   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 26/60
	I1108 00:08:17.875615   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 27/60
	I1108 00:08:18.877439   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 28/60
	I1108 00:08:19.879253   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 29/60
	I1108 00:08:20.880722   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 30/60
	I1108 00:08:21.882104   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 31/60
	I1108 00:08:22.883397   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 32/60
	I1108 00:08:23.884665   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 33/60
	I1108 00:08:24.886205   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 34/60
	I1108 00:08:25.888602   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 35/60
	I1108 00:08:26.889994   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 36/60
	I1108 00:08:27.891634   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 37/60
	I1108 00:08:28.892970   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 38/60
	I1108 00:08:29.894363   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 39/60
	I1108 00:08:30.896033   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 40/60
	I1108 00:08:31.897386   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 41/60
	I1108 00:08:32.899573   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 42/60
	I1108 00:08:33.900980   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 43/60
	I1108 00:08:34.903266   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 44/60
	I1108 00:08:35.904890   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 45/60
	I1108 00:08:36.906233   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 46/60
	I1108 00:08:37.907520   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 47/60
	I1108 00:08:38.908847   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 48/60
	I1108 00:08:39.910131   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 49/60
	I1108 00:08:40.911735   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 50/60
	I1108 00:08:41.913217   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 51/60
	I1108 00:08:42.914626   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 52/60
	I1108 00:08:43.916086   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 53/60
	I1108 00:08:44.917622   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 54/60
	I1108 00:08:45.919200   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 55/60
	I1108 00:08:46.920484   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 56/60
	I1108 00:08:47.921942   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 57/60
	I1108 00:08:48.923310   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 58/60
	I1108 00:08:49.924704   49236 main.go:141] libmachine: (embed-certs-253253) Waiting for machine to stop 59/60
	I1108 00:08:50.925146   49236 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1108 00:08:50.925195   49236 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1108 00:08:50.927124   49236 out.go:177] 
	W1108 00:08:50.928507   49236 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1108 00:08:50.928518   49236 out.go:239] * 
	* 
	W1108 00:08:50.930854   49236 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 00:08:50.932176   49236 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube (first stop). args "out/minikube-linux-amd64 stop -p embed-certs-253253 --alsologtostderr -v=3": exit status 82
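The stderr above makes the failure mechanism visible: the kvm2 driver requests a stop, then polls the guest once per second for 60 iterations, and gives up with GUEST_STOP_TIMEOUT (exit status 82) when the state never leaves "Running". A minimal Go sketch of that bounded wait loop, with a hypothetical in-memory VM standing in for the libvirt driver (all names below are invented for illustration; this is not minikube's actual driver API):

    package main

    import (
        "fmt"
        "time"
    )

    // fakeVM models a guest that accepts a stop request but never
    // actually stops, like embed-certs-253253 in the log above.
    type fakeVM struct{ state string }

    func (v *fakeVM) Stop() error            { return nil } // request accepted, ignored
    func (v *fakeVM) State() (string, error) { return v.state, nil }

    func waitForStop(v *fakeVM, attempts int, poll time.Duration) error {
        if err := v.Stop(); err != nil {
            return err
        }
        for i := 0; i < attempts; i++ {
            if st, err := v.State(); err == nil && st != "Running" {
                return nil // guest reached a stopped state
            }
            fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
            time.Sleep(poll)
        }
        st, _ := v.State()
        return fmt.Errorf("unable to stop vm, current state %q", st)
    }

    func main() {
        vm := &fakeVM{state: "Running"}
        // 5 short polls here instead of the 60 one-second polls in the log.
        if err := waitForStop(vm, 5, 10*time.Millisecond); err != nil {
            fmt.Println("stop err:", err)
        }
    }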
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-253253 -n embed-certs-253253
E1108 00:08:53.871560   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-253253 -n embed-certs-253253: exit status 3 (18.570066904s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1108 00:09:09.505166   50305 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.159:22: connect: no route to host
	E1108 00:09:09.505191   50305 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.159:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-253253" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.75s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-590541 -n old-k8s-version-590541
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-590541 -n old-k8s-version-590541: exit status 3 (3.168427984s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1108 00:07:45.377125   49871 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.49:22: connect: no route to host
	E1108 00:07:45.377144   49871 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.49:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be "Stopped" but got "Error"
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-590541 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-590541 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.157811965s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.49:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-590541 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
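The exit-11 failure above follows from ordering: per the MK_ADDON_ENABLE_PAUSED error, `addons enable` first opens an SSH session to the guest to check whether the cluster is paused (`crictl list`), so an unreachable VM fails the enable before any addon work starts. A rough sketch of the post-stop assertion shape, shelling out to the same binary the log shows (this approximates, rather than reproduces, the helper in start_stop_delete_test.go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // The post-stop check: host status must render as "Stopped"
        // before the test tries to re-enable addons. Here the status
        // command exits non-zero, so the error is deliberately ignored
        // and only the rendered stdout is compared.
        out, _ := exec.Command("out/minikube-linux-amd64",
            "status", "--format={{.Host}}",
            "-p", "old-k8s-version-590541").Output()
        got := strings.TrimSpace(string(out))
        if got != "Stopped" {
            fmt.Printf("expected post-stop host status %q but got %q\n", "Stopped", got)
            return
        }
        // Only then would the test proceed to:
        //   out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-590541
    }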
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-590541 -n old-k8s-version-590541
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-590541 -n old-k8s-version-590541: exit status 3 (3.057975431s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1108 00:07:54.593146   49989 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.49:22: connect: no route to host
	E1108 00:07:54.593171   49989 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.49:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-590541" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-320390 -n no-preload-320390
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-320390 -n no-preload-320390: exit status 3 (3.171330303s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1108 00:09:04.997083   50356 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.176:22: connect: no route to host
	E1108 00:09:04.997104   50356 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.176:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be "Stopped" but got "Error"
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-320390 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-320390 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.149197285s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.176:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-320390 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-320390 -n no-preload-320390
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-320390 -n no-preload-320390: exit status 3 (3.06258256s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1108 00:09:14.209236   50456 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.176:22: connect: no route to host
	E1108 00:09:14.209260   50456 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.176:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-320390" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
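Every error in this group reduces to the same symptom: dial tcp 192.168.61.176:22: connect: no route to host. A quick reachability probe, sketched in Go with the IP copied from the log (the real status command goes through an SSH client, but the TCP dial is the step that fails here):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Probe the guest's SSH port the way the status command's SSH
        // client would. "no route to host" means the kernel has no
        // path to the VM at all, not merely a closed port.
        conn, err := net.DialTimeout("tcp", "192.168.61.176:22", 3*time.Second)
        if err != nil {
            fmt.Println("dial:", err)
            return
        }
        conn.Close()
        fmt.Println("ssh port reachable")
    }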

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-253253 -n embed-certs-253253
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-253253 -n embed-certs-253253: exit status 3 (3.168196199s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1108 00:09:12.673101   50415 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.159:22: connect: no route to host
	E1108 00:09:12.673120   50415 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.159:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be "Stopped" but got "Error"
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-253253 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-253253 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152927656s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.159:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-253253 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-253253 -n embed-certs-253253
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-253253 -n embed-certs-253253: exit status 3 (3.063841054s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1108 00:09:21.889172   50572 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.159:22: connect: no route to host
	E1108 00:09:21.889195   50572 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.159:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-253253" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.39s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (140.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-039263 --alsologtostderr -v=3
E1108 00:10:25.485197   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
E1108 00:10:38.957185   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
E1108 00:10:42.433762   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-039263 --alsologtostderr -v=3: exit status 82 (2m1.488582839s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-039263"  ...
	* Stopping node "default-k8s-diff-port-039263"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 00:09:28.486859   50708 out.go:296] Setting OutFile to fd 1 ...
	I1108 00:09:28.486989   50708 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:09:28.486998   50708 out.go:309] Setting ErrFile to fd 2...
	I1108 00:09:28.487002   50708 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:09:28.487156   50708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
	I1108 00:09:28.487417   50708 out.go:303] Setting JSON to false
	I1108 00:09:28.487503   50708 mustload.go:65] Loading cluster: default-k8s-diff-port-039263
	I1108 00:09:28.487814   50708 config.go:182] Loaded profile config "default-k8s-diff-port-039263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:09:28.487879   50708 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/config.json ...
	I1108 00:09:28.488030   50708 mustload.go:65] Loading cluster: default-k8s-diff-port-039263
	I1108 00:09:28.488128   50708 config.go:182] Loaded profile config "default-k8s-diff-port-039263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:09:28.488160   50708 stop.go:39] StopHost: default-k8s-diff-port-039263
	I1108 00:09:28.488527   50708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:09:28.488569   50708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:09:28.502333   50708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40967
	I1108 00:09:28.502794   50708 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:09:28.503351   50708 main.go:141] libmachine: Using API Version  1
	I1108 00:09:28.503373   50708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:09:28.503678   50708 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:09:28.506139   50708 out.go:177] * Stopping node "default-k8s-diff-port-039263"  ...
	I1108 00:09:28.507518   50708 main.go:141] libmachine: Stopping "default-k8s-diff-port-039263"...
	I1108 00:09:28.507533   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetState
	I1108 00:09:28.509092   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Stop
	I1108 00:09:28.512583   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 0/60
	I1108 00:09:29.514052   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 1/60
	I1108 00:09:30.516186   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 2/60
	I1108 00:09:31.517669   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 3/60
	I1108 00:09:32.519818   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 4/60
	I1108 00:09:33.521898   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 5/60
	I1108 00:09:34.523138   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 6/60
	I1108 00:09:35.524558   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 7/60
	I1108 00:09:36.525776   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 8/60
	I1108 00:09:37.527135   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 9/60
	I1108 00:09:38.529476   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 10/60
	I1108 00:09:39.530674   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 11/60
	I1108 00:09:40.532235   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 12/60
	I1108 00:09:41.533700   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 13/60
	I1108 00:09:42.535156   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 14/60
	I1108 00:09:43.537148   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 15/60
	I1108 00:09:44.538615   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 16/60
	I1108 00:09:45.539856   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 17/60
	I1108 00:09:46.541064   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 18/60
	I1108 00:09:47.542526   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 19/60
	I1108 00:09:48.544494   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 20/60
	I1108 00:09:49.545687   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 21/60
	I1108 00:09:50.546836   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 22/60
	I1108 00:09:51.547952   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 23/60
	I1108 00:09:52.549661   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 24/60
	I1108 00:09:53.552202   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 25/60
	I1108 00:09:54.553614   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 26/60
	I1108 00:09:55.555028   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 27/60
	I1108 00:09:56.556475   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 28/60
	I1108 00:09:57.558069   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 29/60
	I1108 00:09:58.560177   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 30/60
	I1108 00:09:59.561494   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 31/60
	I1108 00:10:00.562869   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 32/60
	I1108 00:10:01.564179   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 33/60
	I1108 00:10:02.565590   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 34/60
	I1108 00:10:03.567896   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 35/60
	I1108 00:10:04.569165   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 36/60
	I1108 00:10:05.571567   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 37/60
	I1108 00:10:06.573031   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 38/60
	I1108 00:10:07.574556   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 39/60
	I1108 00:10:08.576487   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 40/60
	I1108 00:10:09.577737   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 41/60
	I1108 00:10:10.579274   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 42/60
	I1108 00:10:11.580657   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 43/60
	I1108 00:10:12.582092   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 44/60
	I1108 00:10:13.583986   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 45/60
	I1108 00:10:14.585489   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 46/60
	I1108 00:10:15.586761   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 47/60
	I1108 00:10:16.588138   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 48/60
	I1108 00:10:17.589386   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 49/60
	I1108 00:10:18.591506   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 50/60
	I1108 00:10:19.593172   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 51/60
	I1108 00:10:20.594711   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 52/60
	I1108 00:10:21.596436   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 53/60
	I1108 00:10:22.597757   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 54/60
	I1108 00:10:23.599517   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 55/60
	I1108 00:10:24.601072   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 56/60
	I1108 00:10:25.602561   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 57/60
	I1108 00:10:26.604061   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 58/60
	I1108 00:10:27.605522   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 59/60
	I1108 00:10:28.606777   50708 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1108 00:10:28.606817   50708 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1108 00:10:28.606834   50708 retry.go:31] will retry after 1.194655552s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1108 00:10:29.802137   50708 stop.go:39] StopHost: default-k8s-diff-port-039263
	I1108 00:10:29.802498   50708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:10:29.802559   50708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:10:29.816648   50708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45855
	I1108 00:10:29.817031   50708 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:10:29.817437   50708 main.go:141] libmachine: Using API Version  1
	I1108 00:10:29.817458   50708 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:10:29.817771   50708 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:10:29.819793   50708 out.go:177] * Stopping node "default-k8s-diff-port-039263"  ...
	I1108 00:10:29.821036   50708 main.go:141] libmachine: Stopping "default-k8s-diff-port-039263"...
	I1108 00:10:29.821060   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetState
	I1108 00:10:29.822525   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Stop
	I1108 00:10:29.825589   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 0/60
	I1108 00:10:30.826943   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 1/60
	I1108 00:10:31.828436   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 2/60
	I1108 00:10:32.829954   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 3/60
	I1108 00:10:33.831365   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 4/60
	I1108 00:10:34.833414   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 5/60
	I1108 00:10:35.834716   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 6/60
	I1108 00:10:36.836020   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 7/60
	I1108 00:10:37.837384   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 8/60
	I1108 00:10:38.838712   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 9/60
	I1108 00:10:39.840473   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 10/60
	I1108 00:10:40.841984   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 11/60
	I1108 00:10:41.843916   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 12/60
	I1108 00:10:42.845273   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 13/60
	I1108 00:10:43.846530   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 14/60
	I1108 00:10:44.848230   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 15/60
	I1108 00:10:45.849561   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 16/60
	I1108 00:10:46.850873   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 17/60
	I1108 00:10:47.852151   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 18/60
	I1108 00:10:48.853473   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 19/60
	I1108 00:10:49.855315   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 20/60
	I1108 00:10:50.857029   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 21/60
	I1108 00:10:51.858258   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 22/60
	I1108 00:10:52.859740   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 23/60
	I1108 00:10:53.861417   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 24/60
	I1108 00:10:54.863245   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 25/60
	I1108 00:10:55.864617   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 26/60
	I1108 00:10:56.866397   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 27/60
	I1108 00:10:57.867869   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 28/60
	I1108 00:10:58.869322   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 29/60
	I1108 00:10:59.871001   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 30/60
	I1108 00:11:00.872443   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 31/60
	I1108 00:11:01.873584   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 32/60
	I1108 00:11:02.875173   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 33/60
	I1108 00:11:03.876364   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 34/60
	I1108 00:11:04.877914   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 35/60
	I1108 00:11:05.879188   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 36/60
	I1108 00:11:06.880384   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 37/60
	I1108 00:11:07.881748   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 38/60
	I1108 00:11:08.882959   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 39/60
	I1108 00:11:09.884591   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 40/60
	I1108 00:11:10.885818   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 41/60
	I1108 00:11:11.887089   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 42/60
	I1108 00:11:12.888236   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 43/60
	I1108 00:11:13.889735   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 44/60
	I1108 00:11:14.891603   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 45/60
	I1108 00:11:15.892924   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 46/60
	I1108 00:11:16.894189   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 47/60
	I1108 00:11:17.895353   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 48/60
	I1108 00:11:18.896932   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 49/60
	I1108 00:11:19.898660   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 50/60
	I1108 00:11:20.899988   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 51/60
	I1108 00:11:21.901311   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 52/60
	I1108 00:11:22.902585   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 53/60
	I1108 00:11:23.903825   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 54/60
	I1108 00:11:24.905376   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 55/60
	I1108 00:11:25.906826   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 56/60
	I1108 00:11:26.908049   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 57/60
	I1108 00:11:27.909382   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 58/60
	I1108 00:11:28.910626   50708 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for machine to stop 59/60
	I1108 00:11:29.911554   50708 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1108 00:11:29.911593   50708 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1108 00:11:29.913463   50708 out.go:177] 
	W1108 00:11:29.914846   50708 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1108 00:11:29.914857   50708 out.go:239] * 
	* 
	W1108 00:11:29.917670   50708 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 00:11:29.919016   50708 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube (first stop). args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-039263 --alsologtostderr -v=3": exit status 82
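Note the two "Stopping node" lines in the stdout above: after the first 60-poll pass fails, a retry layer (the retry.go:31 line) schedules one more attempt after a ~1.2s backoff, and only after the second pass fails does the command exit 82. A small approximation of that retry-once layer (illustrative only, not minikube's actual retry.go):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryOnce runs fn, and on failure waits a jittered backoff and
    // runs it one more time -- matching the two stop passes above.
    func retryOnce(base time.Duration, fn func() error) error {
        if err := fn(); err != nil {
            wait := base + time.Duration(rand.Int63n(int64(base/4)))
            fmt.Printf("will retry after %s: %v\n", wait, err)
            time.Sleep(wait)
            return fn()
        }
        return nil
    }

    func main() {
        attempt := 0
        err := retryOnce(time.Second, func() error {
            attempt++
            return fmt.Errorf("unable to stop vm, current state %q (attempt %d)", "Running", attempt)
        })
        fmt.Println("final:", err)
    }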
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-039263 -n default-k8s-diff-port-039263
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-039263 -n default-k8s-diff-port-039263: exit status 3 (18.561009414s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1108 00:11:48.481146   51053 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.116:22: connect: no route to host
	E1108 00:11:48.481182   51053 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.116:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-039263" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (140.05s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-039263 -n default-k8s-diff-port-039263
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-039263 -n default-k8s-diff-port-039263: exit status 3 (3.167629999s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1108 00:11:51.649163   51127 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.116:22: connect: no route to host
	E1108 00:11:51.649183   51127 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.116:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be "Stopped" but got "Error"
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-039263 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-039263 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152546264s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.116:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-039263 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-039263 -n default-k8s-diff-port-039263
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-039263 -n default-k8s-diff-port-039263: exit status 3 (3.063733205s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1108 00:12:00.865173   51198 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.116:22: connect: no route to host
	E1108 00:12:00.865196   51198 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.116:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-039263" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.64s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-253253 -n embed-certs-253253
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-11-08 00:27:46.843325254 +0000 UTC m=+5201.006634130
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
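The wait here is a nine-minute poll for any pod labeled k8s-app=kubernetes-dashboard to reach Running, cut off by a context deadline. An equivalent hand-run check, sketched by shelling out to kubectl with the context and namespace taken from the log (an approximation of the test helper, not its code):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
        defer cancel()
        for {
            // Ask for the phase of every matching pod; errors (e.g. an
            // unreachable apiserver) leave out empty and we just retry.
            out, _ := exec.CommandContext(ctx, "kubectl",
                "--context", "embed-certs-253253",
                "-n", "kubernetes-dashboard",
                "get", "pods", "-l", "k8s-app=kubernetes-dashboard",
                "-o", "jsonpath={.items[*].status.phase}").Output()
            if strings.Contains(string(out), "Running") {
                fmt.Println("dashboard pod is running")
                return
            }
            select {
            case <-ctx.Done():
                fmt.Println("failed waiting for pod:", ctx.Err()) // context deadline exceeded
                return
            case <-time.After(5 * time.Second):
            }
        }
    }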
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-253253 -n embed-certs-253253
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-253253 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-253253 logs -n 25: (1.717086021s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p kubernetes-upgrade-161055                           | kubernetes-upgrade-161055    | jenkins | v1.32.0 | 08 Nov 23 00:04 UTC | 08 Nov 23 00:04 UTC |
	| start   | -p no-preload-320390                                   | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:04 UTC | 08 Nov 23 00:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-484343                              | cert-expiration-484343       | jenkins | v1.32.0 | 08 Nov 23 00:04 UTC | 08 Nov 23 00:05 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-590541        | old-k8s-version-590541       | jenkins | v1.32.0 | 08 Nov 23 00:05 UTC | 08 Nov 23 00:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-590541                              | old-k8s-version-590541       | jenkins | v1.32.0 | 08 Nov 23 00:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-484343                              | cert-expiration-484343       | jenkins | v1.32.0 | 08 Nov 23 00:05 UTC | 08 Nov 23 00:05 UTC |
	| start   | -p embed-certs-253253                                  | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:05 UTC | 08 Nov 23 00:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-320390             | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:06 UTC | 08 Nov 23 00:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-320390                                   | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-253253            | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:06 UTC | 08 Nov 23 00:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-253253                                  | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-688874                              | stopped-upgrade-688874       | jenkins | v1.32.0 | 08 Nov 23 00:06 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-688874                              | stopped-upgrade-688874       | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC | 08 Nov 23 00:07 UTC |
	| delete  | -p                                                     | disable-driver-mounts-560216 | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC | 08 Nov 23 00:07 UTC |
	|         | disable-driver-mounts-560216                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC | 08 Nov 23 00:09 UTC |
	|         | default-k8s-diff-port-039263                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-590541             | old-k8s-version-590541       | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-590541                              | old-k8s-version-590541       | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC | 08 Nov 23 00:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-320390                  | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-253253                 | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-320390                                   | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC | 08 Nov 23 00:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-253253                                  | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC | 08 Nov 23 00:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-039263  | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC | 08 Nov 23 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC |                     |
	|         | default-k8s-diff-port-039263                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-039263       | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:12 UTC | 08 Nov 23 00:19 UTC |
	|         | default-k8s-diff-port-039263                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
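	
	The rows above with an empty end-time never recorded a completion; they line up with the TestStartStop/*/Stop failures listed at the top of this report. To replay one locally, the invocation recorded in the table is the following (here minikube stands for the out/minikube-linux-amd64 binary the suite builds; a fresh environment will of course have different profile state and timings):
	
	    # Recorded stop invocation from the audit table above; on this CI
	    # host it did not complete before the test timed out.
	    minikube stop -p old-k8s-version-590541 --alsologtostderr -v=3
	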
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/08 00:12:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 00:12:00.921478   51228 out.go:296] Setting OutFile to fd 1 ...
	I1108 00:12:00.921584   51228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:12:00.921592   51228 out.go:309] Setting ErrFile to fd 2...
	I1108 00:12:00.921597   51228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:12:00.921752   51228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
	I1108 00:12:00.922282   51228 out.go:303] Setting JSON to false
	I1108 00:12:00.923151   51228 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6870,"bootTime":1699395451,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 00:12:00.923210   51228 start.go:138] virtualization: kvm guest
	I1108 00:12:00.925322   51228 out.go:177] * [default-k8s-diff-port-039263] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1108 00:12:00.926718   51228 out.go:177]   - MINIKUBE_LOCATION=17585
	I1108 00:12:00.928030   51228 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 00:12:00.926756   51228 notify.go:220] Checking for updates...
	I1108 00:12:00.930659   51228 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:12:00.932049   51228 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	I1108 00:12:00.933341   51228 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 00:12:00.934394   51228 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 00:12:00.936334   51228 config.go:182] Loaded profile config "default-k8s-diff-port-039263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:12:00.936806   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:12:00.936857   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:12:00.950893   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36119
	I1108 00:12:00.951284   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:12:00.951775   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:12:00.951796   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:12:00.952131   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:12:00.952308   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:12:00.952537   51228 driver.go:378] Setting default libvirt URI to qemu:///system
	I1108 00:12:00.952850   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:12:00.952894   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:12:00.966402   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44715
	I1108 00:12:00.966726   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:12:00.967218   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:12:00.967238   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:12:00.967525   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:12:00.967705   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:12:01.002079   51228 out.go:177] * Using the kvm2 driver based on existing profile
	I1108 00:12:01.003352   51228 start.go:298] selected driver: kvm2
	I1108 00:12:01.003362   51228 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-039263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-039263 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.116 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:12:01.003471   51228 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 00:12:01.004117   51228 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:12:01.004197   51228 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17585-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1108 00:12:01.018635   51228 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1108 00:12:01.018987   51228 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 00:12:01.019047   51228 cni.go:84] Creating CNI manager for ""
	I1108 00:12:01.019060   51228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:12:01.019072   51228 start_flags.go:323] config:
	{Name:default-k8s-diff-port-039263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-039263 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.116 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:12:01.019251   51228 iso.go:125] acquiring lock: {Name:mk02d02b2a7a45dbdd1b46a32fb0724673cb4d8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:12:01.021306   51228 out.go:177] * Starting control plane node default-k8s-diff-port-039263 in cluster default-k8s-diff-port-039263
	I1108 00:12:00.865093   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:03.937104   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:01.022723   51228 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1108 00:12:01.022765   51228 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1108 00:12:01.022777   51228 cache.go:56] Caching tarball of preloaded images
	I1108 00:12:01.022864   51228 preload.go:174] Found /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 00:12:01.022875   51228 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1108 00:12:01.022984   51228 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/config.json ...
	I1108 00:12:01.023164   51228 start.go:365] acquiring machines lock for default-k8s-diff-port-039263: {Name:mkf032f30be570950285b6e092e75fb29cc3d166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
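	
	A note on the "acquiring machines lock" lines: minikube serializes all VM create/start/stop work on this agent through a single machines lock (13m timeout above), which is why later lines report holds such as acquired machines lock ... in 3m26.103068871s; those processes were queued behind this one, not hung. A hypothetical flock(1) analogue of that mutual exclusion (the lock path is invented for illustration; minikube keeps its real lock in its own store):
	
	    # Hypothetical sketch: one holder at a time, 13-minute timeout as above.
	    exec 9>/tmp/minikube-machines.lock
	    flock --timeout 780 9 || { echo "timed out waiting for machines lock" >&2; exit 1; }
	    # ... manipulate VMs while file descriptor 9 is held ...
	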
	I1108 00:12:10.017091   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:13.089091   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:19.169065   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:22.241084   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:28.321050   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:31.393060   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:37.473056   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
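	
	The block of dial errors above is libmachine probing the old-k8s-version VM's SSH port every few seconds while the guest is still down; "no route to host" means the VM's IP is unreachable at the network layer, not that sshd rejected anything. A rough shell equivalent of that probe loop (IP taken from the log; nc and the fixed 3 s interval are stand-ins for libmachine's internal dialer and retry schedule):
	
	    # Poll TCP/22 on the guest until it answers, as the dialer above does.
	    until nc -z -w 2 192.168.50.49 22; do
	        echo "dial tcp 192.168.50.49:22: connect: no route to host (retrying)"
	        sleep 3
	    done
	    echo "ssh port reachable"
	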
	I1108 00:12:40.475708   50505 start.go:369] acquired machines lock for "no-preload-320390" in 3m26.103068871s
	I1108 00:12:40.475773   50505 start.go:96] Skipping create...Using existing machine configuration
	I1108 00:12:40.475781   50505 fix.go:54] fixHost starting: 
	I1108 00:12:40.476087   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:12:40.476116   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:12:40.490309   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45419
	I1108 00:12:40.490708   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:12:40.491196   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:12:40.491217   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:12:40.491530   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:12:40.491718   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:40.491870   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetState
	I1108 00:12:40.493597   50505 fix.go:102] recreateIfNeeded on no-preload-320390: state=Stopped err=<nil>
	I1108 00:12:40.493628   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	W1108 00:12:40.493762   50505 fix.go:128] unexpected machine state, will restart: <nil>
	I1108 00:12:40.495670   50505 out.go:177] * Restarting existing kvm2 VM for "no-preload-320390" ...
	I1108 00:12:40.496930   50505 main.go:141] libmachine: (no-preload-320390) Calling .Start
	I1108 00:12:40.497098   50505 main.go:141] libmachine: (no-preload-320390) Ensuring networks are active...
	I1108 00:12:40.497753   50505 main.go:141] libmachine: (no-preload-320390) Ensuring network default is active
	I1108 00:12:40.498094   50505 main.go:141] libmachine: (no-preload-320390) Ensuring network mk-no-preload-320390 is active
	I1108 00:12:40.498442   50505 main.go:141] libmachine: (no-preload-320390) Getting domain xml...
	I1108 00:12:40.499199   50505 main.go:141] libmachine: (no-preload-320390) Creating domain...
	I1108 00:12:41.718179   50505 main.go:141] libmachine: (no-preload-320390) Waiting to get IP...
	I1108 00:12:41.719024   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:41.719423   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:41.719497   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:41.719407   51373 retry.go:31] will retry after 204.819851ms: waiting for machine to come up
	I1108 00:12:41.925924   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:41.926414   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:41.926445   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:41.926361   51373 retry.go:31] will retry after 237.59613ms: waiting for machine to come up
	I1108 00:12:42.165848   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:42.166251   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:42.166282   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:42.166195   51373 retry.go:31] will retry after 306.914093ms: waiting for machine to come up
	I1108 00:12:42.474651   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:42.475026   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:42.475057   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:42.474981   51373 retry.go:31] will retry after 490.427385ms: waiting for machine to come up
	I1108 00:12:42.967292   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:42.967709   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:42.967733   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:42.967661   51373 retry.go:31] will retry after 684.227655ms: waiting for machine to come up
	I1108 00:12:43.653384   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:43.653823   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:43.653847   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:43.653774   51373 retry.go:31] will retry after 640.101868ms: waiting for machine to come up
	I1108 00:12:40.473798   50022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 00:12:40.473838   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:12:40.475605   50022 machine.go:91] provisioned docker machine in 4m37.566672036s
	I1108 00:12:40.475639   50022 fix.go:56] fixHost completed within 4m37.589859084s
	I1108 00:12:40.475644   50022 start.go:83] releasing machines lock for "old-k8s-version-590541", held for 4m37.589890946s
	W1108 00:12:40.475670   50022 start.go:691] error starting host: provision: host is not running
	W1108 00:12:40.475777   50022 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1108 00:12:40.475788   50022 start.go:706] Will try again in 5 seconds ...
	I1108 00:12:44.295060   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:44.295559   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:44.295610   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:44.295506   51373 retry.go:31] will retry after 797.709386ms: waiting for machine to come up
	I1108 00:12:45.095135   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:45.095552   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:45.095575   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:45.095476   51373 retry.go:31] will retry after 1.052157242s: waiting for machine to come up
	I1108 00:12:46.149040   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:46.149393   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:46.149426   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:46.149336   51373 retry.go:31] will retry after 1.246701556s: waiting for machine to come up
	I1108 00:12:47.397579   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:47.397942   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:47.397981   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:47.397900   51373 retry.go:31] will retry after 1.742754262s: waiting for machine to come up
	I1108 00:12:49.142995   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:49.143390   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:49.143419   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:49.143349   51373 retry.go:31] will retry after 2.412997156s: waiting for machine to come up
	I1108 00:12:45.476072   50022 start.go:365] acquiring machines lock for old-k8s-version-590541: {Name:mkf032f30be570950285b6e092e75fb29cc3d166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1108 00:12:51.558471   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:51.558857   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:51.558880   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:51.558809   51373 retry.go:31] will retry after 3.169873944s: waiting for machine to come up
	I1108 00:12:54.732010   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:54.732320   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:54.732340   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:54.732292   51373 retry.go:31] will retry after 3.452837487s: waiting for machine to come up
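	
	The retry.go lines above show the kvm2 driver polling libvirt for the guest's DHCP lease, with waits growing from roughly 200 ms to several seconds. A minimal sketch of that backoff pattern, using virsh as a stand-in for the libvirt API calls the driver actually makes (network name and MAC address taken from the log):
	
	    # Poll the libvirt network's DHCP leases with roughly doubling delays
	    # until the domain's MAC has an address, mimicking the retries above.
	    delay=0.2
	    until virsh -c qemu:///system net-dhcp-leases mk-no-preload-320390 | grep -q '52:54:00:0f:d8:91'; do
	        echo "no IP yet; retrying in ${delay}s"
	        sleep "$delay"
	        delay=$(awk "BEGIN{print $delay * 2}")
	    done
	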
	I1108 00:12:58.188516   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.188983   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has current primary IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.189014   50505 main.go:141] libmachine: (no-preload-320390) Found IP for machine: 192.168.61.176
	I1108 00:12:58.189036   50505 main.go:141] libmachine: (no-preload-320390) Reserving static IP address...
	I1108 00:12:58.189332   50505 main.go:141] libmachine: (no-preload-320390) Reserved static IP address: 192.168.61.176
	I1108 00:12:58.189364   50505 main.go:141] libmachine: (no-preload-320390) Waiting for SSH to be available...
	I1108 00:12:58.189388   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "no-preload-320390", mac: "52:54:00:0f:d8:91", ip: "192.168.61.176"} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.189415   50505 main.go:141] libmachine: (no-preload-320390) DBG | skip adding static IP to network mk-no-preload-320390 - found existing host DHCP lease matching {name: "no-preload-320390", mac: "52:54:00:0f:d8:91", ip: "192.168.61.176"}
	I1108 00:12:58.189432   50505 main.go:141] libmachine: (no-preload-320390) DBG | Getting to WaitForSSH function...
	I1108 00:12:58.191264   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.191565   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.191598   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.191730   50505 main.go:141] libmachine: (no-preload-320390) DBG | Using SSH client type: external
	I1108 00:12:58.191760   50505 main.go:141] libmachine: (no-preload-320390) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa (-rw-------)
	I1108 00:12:58.191794   50505 main.go:141] libmachine: (no-preload-320390) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.176 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1108 00:12:58.191808   50505 main.go:141] libmachine: (no-preload-320390) DBG | About to run SSH command:
	I1108 00:12:58.191819   50505 main.go:141] libmachine: (no-preload-320390) DBG | exit 0
	I1108 00:12:58.284621   50505 main.go:141] libmachine: (no-preload-320390) DBG | SSH cmd err, output: <nil>: 
	I1108 00:12:58.284983   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetConfigRaw
	I1108 00:12:58.285600   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetIP
	I1108 00:12:58.287966   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.288289   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.288325   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.288532   50505 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/config.json ...
	I1108 00:12:58.288712   50505 machine.go:88] provisioning docker machine ...
	I1108 00:12:58.288732   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:58.288917   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetMachineName
	I1108 00:12:58.289074   50505 buildroot.go:166] provisioning hostname "no-preload-320390"
	I1108 00:12:58.289097   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetMachineName
	I1108 00:12:58.289217   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:58.291053   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.291329   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.291358   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.291460   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:58.291613   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.291749   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.291849   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:58.292009   50505 main.go:141] libmachine: Using SSH client type: native
	I1108 00:12:58.292394   50505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1108 00:12:58.292419   50505 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-320390 && echo "no-preload-320390" | sudo tee /etc/hostname
	I1108 00:12:58.433310   50505 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-320390
	
	I1108 00:12:58.433333   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:58.435959   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.436351   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.436383   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.436531   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:58.436710   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.436853   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.436959   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:58.437088   50505 main.go:141] libmachine: Using SSH client type: native
	I1108 00:12:58.437607   50505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1108 00:12:58.437633   50505 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-320390' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-320390/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-320390' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 00:12:58.578473   50505 main.go:141] libmachine: SSH cmd err, output: <nil>: 
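	
	Read on its own, the snippet just piped over SSH is an idempotent /etc/hosts update: it does nothing when a line already ends in the hostname, rewrites an existing 127.0.1.1 entry when there is one, and appends otherwise. A commented standalone version, with the hostname pulled into a variable purely for readability:
	
	    #!/bin/sh
	    # Idempotent /etc/hosts update, mirroring the provisioning step above.
	    HOST=no-preload-320390
	    if ! grep -xq ".*\s$HOST" /etc/hosts; then
	        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	            # Rewrite the existing 127.0.1.1 entry in place.
	            sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $HOST/g" /etc/hosts
	        else
	            # No 127.0.1.1 entry yet: append one.
	            echo "127.0.1.1 $HOST" | sudo tee -a /etc/hosts
	        fi
	    fi
	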
	I1108 00:12:58.578506   50505 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1108 00:12:58.578568   50505 buildroot.go:174] setting up certificates
	I1108 00:12:58.578582   50505 provision.go:83] configureAuth start
	I1108 00:12:58.578600   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetMachineName
	I1108 00:12:58.578889   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetIP
	I1108 00:12:58.581534   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.581857   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.581881   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.581948   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:58.583777   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.584002   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.584023   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.584121   50505 provision.go:138] copyHostCerts
	I1108 00:12:58.584172   50505 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1108 00:12:58.584184   50505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1108 00:12:58.584247   50505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1108 00:12:58.584327   50505 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1108 00:12:58.584337   50505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1108 00:12:58.584359   50505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1108 00:12:58.584407   50505 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1108 00:12:58.584415   50505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1108 00:12:58.584434   50505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1108 00:12:58.584473   50505 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.no-preload-320390 san=[192.168.61.176 192.168.61.176 localhost 127.0.0.1 minikube no-preload-320390]
	I1108 00:12:58.785035   50505 provision.go:172] copyRemoteCerts
	I1108 00:12:58.785095   50505 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 00:12:58.785127   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:58.787683   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.788001   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.788037   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.788194   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:58.788363   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.788534   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:58.788678   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:12:58.881791   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 00:12:58.905314   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1108 00:12:58.928183   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 00:12:58.951053   50505 provision.go:86] duration metric: configureAuth took 372.456375ms
	I1108 00:12:58.951079   50505 buildroot.go:189] setting minikube options for container-runtime
	I1108 00:12:58.951288   50505 config.go:182] Loaded profile config "no-preload-320390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:12:58.951368   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:58.953851   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.954158   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.954182   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.954309   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:58.954504   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.954689   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.954819   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:58.954964   50505 main.go:141] libmachine: Using SSH client type: native
	I1108 00:12:58.955269   50505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1108 00:12:58.955283   50505 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 00:12:59.265311   50505 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 00:12:59.265342   50505 machine.go:91] provisioned docker machine in 976.618103ms
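	
	The printf %!s(MISSING) in the command above is not corruption in this report: it is Go's fmt package flagging a format verb that had no operand when this line was rendered into the log. The guest evidently ran the intact command, printf's own %s included, since the CRIO_MINIKUBE_OPTIONS content is echoed back correctly in the result. With the operand restored, the step just writes an insecure-registry flag into a CRI-O sysconfig drop-in and restarts the runtime:
	
	    # What the provisioning step above effectively runs on the guest.
	    sudo mkdir -p /etc/sysconfig && printf %s "
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	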
	I1108 00:12:59.265353   50505 start.go:300] post-start starting for "no-preload-320390" (driver="kvm2")
	I1108 00:12:59.265362   50505 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 00:12:59.265377   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:59.265683   50505 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 00:12:59.265721   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:59.533994   50613 start.go:369] acquired machines lock for "embed-certs-253253" in 3m37.489465904s
	I1108 00:12:59.534047   50613 start.go:96] Skipping create...Using existing machine configuration
	I1108 00:12:59.534093   50613 fix.go:54] fixHost starting: 
	I1108 00:12:59.534485   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:12:59.534531   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:12:59.553784   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34533
	I1108 00:12:59.554193   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:12:59.554676   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:12:59.554702   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:12:59.555006   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:12:59.555188   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:12:59.555320   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetState
	I1108 00:12:59.556783   50613 fix.go:102] recreateIfNeeded on embed-certs-253253: state=Stopped err=<nil>
	I1108 00:12:59.556804   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	W1108 00:12:59.556989   50613 fix.go:128] unexpected machine state, will restart: <nil>
	I1108 00:12:59.558834   50613 out.go:177] * Restarting existing kvm2 VM for "embed-certs-253253" ...
	I1108 00:12:59.268378   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.268792   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:59.268836   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.268991   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:59.269175   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:59.269337   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:59.269480   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:12:59.363687   50505 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 00:12:59.368009   50505 info.go:137] Remote host: Buildroot 2021.02.12
	I1108 00:12:59.368028   50505 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1108 00:12:59.368087   50505 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1108 00:12:59.368176   50505 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1108 00:12:59.368287   50505 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 00:12:59.377685   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:12:59.399143   50505 start.go:303] post-start completed in 133.780055ms
	I1108 00:12:59.399161   50505 fix.go:56] fixHost completed within 18.923380073s
	I1108 00:12:59.399178   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:59.401608   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.401977   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:59.402007   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.402127   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:59.402315   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:59.402471   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:59.402650   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:59.402824   50505 main.go:141] libmachine: Using SSH client type: native
	I1108 00:12:59.403150   50505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1108 00:12:59.403162   50505 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1108 00:12:59.533831   50505 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699402379.481958632
	
	I1108 00:12:59.533852   50505 fix.go:206] guest clock: 1699402379.481958632
	I1108 00:12:59.533859   50505 fix.go:219] Guest: 2023-11-08 00:12:59.481958632 +0000 UTC Remote: 2023-11-08 00:12:59.399164235 +0000 UTC m=+225.183083525 (delta=82.794397ms)
	I1108 00:12:59.533876   50505 fix.go:190] guest clock delta is within tolerance: 82.794397ms
	I1108 00:12:59.533880   50505 start.go:83] releasing machines lock for "no-preload-320390", held for 19.058127295s
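The guest-clock check just above works by running "date +%s.%N" on the VM, parsing the output as seconds plus nanoseconds, and comparing it against a host-side timestamp taken at the same moment; the step passes here because the 82.794397ms delta is within tolerance. Below is a minimal Go sketch of that comparison using the values from the log; the clockDelta helper and the tolerance constant are assumptions for illustration, not minikube's actual fix.go.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far
// the guest clock sits from the supplied host-side timestamp.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, fmt.Errorf("parsing guest clock %q: %w", guestOut, err)
	}
	guest := time.Unix(int64(secs), int64((secs-math.Trunc(secs))*1e9))
	return guest.Sub(host), nil
}

func main() {
	// Timestamps lifted from the log lines above.
	host := time.Unix(1699402379, 399164235)
	delta, err := clockDelta("1699402379.481958632", host)
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed threshold for illustration
	fmt.Printf("delta=%v within tolerance: %v\n", delta.Abs(), delta.Abs() < tolerance)
}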
	I1108 00:12:59.533902   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:59.534171   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetIP
	I1108 00:12:59.537173   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.537616   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:59.537665   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.537736   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:59.538230   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:59.538431   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:59.538517   50505 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 00:12:59.538613   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:59.538659   50505 ssh_runner.go:195] Run: cat /version.json
	I1108 00:12:59.538683   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:59.541051   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.541283   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.541438   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:59.541463   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.541599   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:59.541608   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:59.541634   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.541775   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:59.541845   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:59.541939   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:59.541997   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:59.542078   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:12:59.542093   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:59.542265   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:12:59.637947   50505 ssh_runner.go:195] Run: systemctl --version
	I1108 00:12:59.660255   50505 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 00:12:59.809407   50505 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 00:12:59.816246   50505 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 00:12:59.816323   50505 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:12:59.831564   50505 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 00:12:59.831586   50505 start.go:472] detecting cgroup driver to use...
	I1108 00:12:59.831651   50505 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 00:12:59.847556   50505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 00:12:59.861077   50505 docker.go:203] disabling cri-docker service (if available) ...
	I1108 00:12:59.861143   50505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 00:12:59.876764   50505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 00:12:59.890894   50505 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 00:13:00.001947   50505 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 00:13:00.121923   50505 docker.go:219] disabling docker service ...
	I1108 00:13:00.122000   50505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 00:13:00.135525   50505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 00:13:00.148130   50505 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 00:13:00.259318   50505 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 00:13:00.368101   50505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 00:13:00.381138   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 00:13:00.398173   50505 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1108 00:13:00.398245   50505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:00.407655   50505 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 00:13:00.407699   50505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:00.416919   50505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:00.425767   50505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:00.434447   50505 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 00:13:00.443679   50505 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 00:13:00.451581   50505 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1108 00:13:00.451619   50505 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1108 00:13:00.464498   50505 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 00:13:00.474332   50505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 00:13:00.599521   50505 ssh_runner.go:195] Run: sudo systemctl restart crio
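The run from 00:13:00.381 through the restart above is minikube reshaping the guest's container-runtime config over SSH: point crictl at the crio socket, pin the pause image and the cgroup driver in the 02-crio.conf drop-in, ensure br_netfilter and IPv4 forwarding are available, then reload systemd and restart crio. A condensed sketch of that sequence follows; the run helper is hypothetical, and the crictl.yaml command is lightly adapted from the log for valid quoting.

package main

import "fmt"

// configureCRIO replays the guest-side command sequence from the log.
// run executes a single shell command on the VM (e.g. over SSH).
func configureCRIO(run func(cmd string) error) error {
	cmds := []string{
		// Tell crictl which runtime socket to talk to.
		`sudo mkdir -p /etc && printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml`,
		// Pin the pause image and the cgroup driver in cri-o's drop-in config.
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		// Kernel prerequisites for bridged pod networking.
		`sudo modprobe br_netfilter`,
		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
		// Apply everything.
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, c := range cmds {
		if err := run(c); err != nil {
			return fmt.Errorf("running %q: %w", c, err)
		}
	}
	return nil
}

Order matters here: restarting crio before the sed edits would leave the old pause image and cgroup driver in effect, which is why the restart comes last in the log as well.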
	I1108 00:13:00.770248   50505 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 00:13:00.770341   50505 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 00:13:00.775707   50505 start.go:540] Will wait 60s for crictl version
	I1108 00:13:00.775768   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:00.779578   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 00:13:00.821230   50505 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1108 00:13:00.821320   50505 ssh_runner.go:195] Run: crio --version
	I1108 00:13:00.872851   50505 ssh_runner.go:195] Run: crio --version
	I1108 00:13:00.920420   50505 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1108 00:12:59.560111   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Start
	I1108 00:12:59.560287   50613 main.go:141] libmachine: (embed-certs-253253) Ensuring networks are active...
	I1108 00:12:59.561030   50613 main.go:141] libmachine: (embed-certs-253253) Ensuring network default is active
	I1108 00:12:59.561390   50613 main.go:141] libmachine: (embed-certs-253253) Ensuring network mk-embed-certs-253253 is active
	I1108 00:12:59.561717   50613 main.go:141] libmachine: (embed-certs-253253) Getting domain xml...
	I1108 00:12:59.562287   50613 main.go:141] libmachine: (embed-certs-253253) Creating domain...
	I1108 00:13:00.806061   50613 main.go:141] libmachine: (embed-certs-253253) Waiting to get IP...
	I1108 00:13:00.806862   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:00.807268   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:00.807340   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:00.807226   51493 retry.go:31] will retry after 261.179966ms: waiting for machine to come up
	I1108 00:13:01.069535   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:01.070048   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:01.070078   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:01.069997   51493 retry.go:31] will retry after 302.795302ms: waiting for machine to come up
	I1108 00:13:01.374567   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:01.375094   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:01.375119   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:01.375043   51493 retry.go:31] will retry after 303.804523ms: waiting for machine to come up
	I1108 00:13:01.680374   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:01.680698   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:01.680726   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:01.680660   51493 retry.go:31] will retry after 446.122126ms: waiting for machine to come up
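The retry.go lines above (261ms, 302ms, 303ms, 446ms, then roughly doubling waits later on) are a jittered backoff around the DHCP-lease lookup: every failed attempt to read the domain's current IP schedules another attempt after a slightly randomized, growing delay. A stand-in for that loop, where lookupIP, the growth cap, and the jitter fraction are assumptions rather than minikube's exact retry implementation:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("unable to find current IP address of domain")

// waitForIP polls lookupIP with jittered, roughly doubling delays until
// the DHCP lease appears or the deadline passes.
func waitForIP(lookupIP func() (string, error), deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 250 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Randomize a little so parallel machines don't poll in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 4*time.Second { // cap the growth, as the log's waits suggest
			delay *= 2
		}
	}
	return "", errNoLease
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		if attempts++; attempts < 4 {
			return "", errNoLease
		}
		return "192.168.39.159", nil // the lease finally shows up
	}, time.Minute)
	fmt.Println(ip, err)
}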
	I1108 00:13:00.921979   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetIP
	I1108 00:13:00.924760   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:13:00.925121   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:13:00.925148   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:13:00.925370   50505 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1108 00:13:00.929750   50505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:13:00.941338   50505 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1108 00:13:00.941372   50505 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:13:00.979343   50505 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1108 00:13:00.979370   50505 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.3 registry.k8s.io/kube-controller-manager:v1.28.3 registry.k8s.io/kube-scheduler:v1.28.3 registry.k8s.io/kube-proxy:v1.28.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1108 00:13:00.979489   50505 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1108 00:13:00.979539   50505 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1108 00:13:00.979465   50505 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:13:00.979636   50505 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.3
	I1108 00:13:00.979477   50505 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1108 00:13:00.979465   50505 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.3
	I1108 00:13:00.979515   50505 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1108 00:13:00.979516   50505 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.3
	I1108 00:13:00.980609   50505 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1108 00:13:00.980645   50505 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1108 00:13:00.980677   50505 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1108 00:13:00.980704   50505 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1108 00:13:00.980645   50505 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.3
	I1108 00:13:00.980733   50505 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:13:00.980949   50505 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.3
	I1108 00:13:00.980994   50505 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.3
	I1108 00:13:01.126154   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.3
	I1108 00:13:01.131334   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.3
	I1108 00:13:01.141929   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.3
	I1108 00:13:01.150051   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.3
	I1108 00:13:01.178472   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I1108 00:13:01.198519   50505 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.3" does not exist at hash "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076" in container runtime
	I1108 00:13:01.198569   50505 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.3
	I1108 00:13:01.198628   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:01.214419   50505 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.3" does not exist at hash "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3" in container runtime
	I1108 00:13:01.214470   50505 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1108 00:13:01.214527   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:01.249270   50505 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.3" does not exist at hash "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4" in container runtime
	I1108 00:13:01.249316   50505 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.3
	I1108 00:13:01.249321   50505 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.3" needs transfer: "registry.k8s.io/kube-proxy:v1.28.3" does not exist at hash "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf" in container runtime
	I1108 00:13:01.249354   50505 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.3
	I1108 00:13:01.249363   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:01.249398   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:01.257971   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1108 00:13:01.268557   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1108 00:13:01.279207   50505 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1108 00:13:01.279254   50505 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1108 00:13:01.279255   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.3
	I1108 00:13:01.279295   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:01.279304   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.3
	I1108 00:13:01.279365   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.3
	I1108 00:13:01.279492   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.3
	I1108 00:13:01.477649   50505 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1108 00:13:01.477691   50505 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1108 00:13:01.477740   50505 ssh_runner.go:195] Run: which crictl
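Each "needs transfer" verdict above comes from comparing the image ID the guest's podman reports against the hash minikube recorded for its cached copy: a missing image or a mismatched ID means the stale image is removed via crictl and the cached tarball gets loaded instead. A sketch of that decision, where inspectID is a hypothetical wrapper around "sudo podman image inspect --format {{.Id}}":

package main

import "fmt"

// needsTransfer reports whether a cached image must be shipped to the guest.
// wantID is the image ID recorded for the cached tarball; inspectID asks the
// guest's container runtime which ID it currently holds.
func needsTransfer(image, wantID string, inspectID func(string) (string, error)) bool {
	gotID, err := inspectID(image)
	if err != nil {
		return true // not present in the container runtime at all
	}
	return gotID != wantID // present, but at a different hash
}

func main() {
	// Simulate the kube-apiserver case from the log: inspect fails, so the
	// image is marked for transfer.
	missing := func(string) (string, error) { return "", fmt.Errorf("no such image") }
	fmt.Println(needsTransfer("registry.k8s.io/kube-apiserver:v1.28.3",
		"53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076", missing))
}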
	I1108 00:13:01.477782   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3
	I1108 00:13:01.477888   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1108 00:13:01.477888   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3
	I1108 00:13:01.477963   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3
	I1108 00:13:01.478038   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3
	I1108 00:13:01.478005   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1108 00:13:01.478079   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1108 00:13:01.478116   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.3
	I1108 00:13:01.478121   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1108 00:13:01.489810   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1108 00:13:01.490983   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.3 (exists)
	I1108 00:13:01.491011   50505 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1108 00:13:01.491049   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1108 00:13:01.490984   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.3 (exists)
	I1108 00:13:01.556911   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1108 00:13:01.556996   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.3 (exists)
	I1108 00:13:01.557036   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1
	I1108 00:13:01.557048   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.3 (exists)
	I1108 00:13:01.576123   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1108 00:13:01.576251   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0
	I1108 00:13:02.001052   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:13:02.127888   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:02.128302   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:02.128333   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:02.128247   51493 retry.go:31] will retry after 498.0349ms: waiting for machine to come up
	I1108 00:13:02.627872   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:02.628339   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:02.628373   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:02.628296   51493 retry.go:31] will retry after 852.947554ms: waiting for machine to come up
	I1108 00:13:03.483507   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:03.484074   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:03.484119   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:03.484024   51493 retry.go:31] will retry after 1.040831469s: waiting for machine to come up
	I1108 00:13:04.526186   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:04.526503   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:04.526535   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:04.526446   51493 retry.go:31] will retry after 960.701342ms: waiting for machine to come up
	I1108 00:13:05.489041   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:05.489473   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:05.489509   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:05.489456   51493 retry.go:31] will retry after 1.729813733s: waiting for machine to come up
	I1108 00:13:04.536381   50505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3: (3.045307892s)
	I1108 00:13:04.536412   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 from cache
	I1108 00:13:04.536439   50505 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1108 00:13:04.536453   50505 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1: (2.979392017s)
	I1108 00:13:04.536485   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I1108 00:13:04.536491   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1108 00:13:04.536531   50505 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0: (2.960264305s)
	I1108 00:13:04.536549   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I1108 00:13:04.536590   50505 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.535505624s)
	I1108 00:13:04.536622   50505 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1108 00:13:04.536652   50505 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:13:04.536694   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:07.220832   50505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3: (2.68430655s)
	I1108 00:13:07.220863   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 from cache
	I1108 00:13:07.220898   50505 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1108 00:13:07.220902   50505 ssh_runner.go:235] Completed: which crictl: (2.684187653s)
	I1108 00:13:07.220982   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1108 00:13:07.221015   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:13:08.593275   50505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3: (1.372272111s)
	I1108 00:13:08.593311   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 from cache
	I1108 00:13:08.593326   50505 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.372286228s)
	I1108 00:13:08.593374   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1108 00:13:08.593338   50505 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.3
	I1108 00:13:08.593474   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1108 00:13:08.593479   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3
	I1108 00:13:07.221541   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:07.221969   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:07.221998   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:07.221953   51493 retry.go:31] will retry after 1.97898588s: waiting for machine to come up
	I1108 00:13:09.202332   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:09.202803   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:09.202831   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:09.202756   51493 retry.go:31] will retry after 2.565503631s: waiting for machine to come up
	I1108 00:13:11.769857   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:11.770332   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:11.770354   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:11.770292   51493 retry.go:31] will retry after 3.236419831s: waiting for machine to come up
	I1108 00:13:10.382696   50505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3: (1.789194848s)
	I1108 00:13:10.382726   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 from cache
	I1108 00:13:10.382747   50505 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.789249445s)
	I1108 00:13:10.382776   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1108 00:13:10.382752   50505 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1108 00:13:10.382828   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I1108 00:13:11.846184   50505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.463326325s)
	I1108 00:13:11.846222   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I1108 00:13:11.846254   50505 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I1108 00:13:11.846322   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I1108 00:13:15.008441   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:15.008899   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:15.008936   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:15.008860   51493 retry.go:31] will retry after 3.079379099s: waiting for machine to come up
	I1108 00:13:19.138865   50505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.292505697s)
	I1108 00:13:19.138899   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I1108 00:13:19.138926   50505 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1108 00:13:19.138987   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
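The image loads above run strictly one at a time: for each cached tarball, minikube first stats the copy already on the VM (the "copy: skipping ... (exists)" lines) and only transfers it when missing or stale, then feeds it to "sudo podman load -i". That serialization is why the 7.29s etcd load shows up as a single long gap before the storage-provisioner load starts. A compressed sketch of that loop; upToDate, scp, and run are hypothetical helpers standing in for ssh_runner.

package main

import "path/filepath"

// loadCachedImages transfers (when needed) and loads each cached image
// tarball sequentially, mirroring the crio.go "Loading image" lines.
func loadCachedImages(tarballs []string, upToDate func(remote string) bool,
	scp func(local, remote string) error, run func(cmd string) error) error {
	for _, local := range tarballs {
		remote := filepath.Join("/var/lib/minikube/images", filepath.Base(local))
		if !upToDate(remote) { // size/mtime check, like the stat calls above
			if err := scp(local, remote); err != nil {
				return err
			}
		}
		if err := run("sudo podman load -i " + remote); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// No-op helpers, just to show the call shape.
	_ = loadCachedImages(
		[]string{"/cache/etcd_3.5.9-0"},
		func(string) bool { return true },          // already up to date on the VM
		func(string, string) error { return nil }, // scp
		func(string) error { return nil },         // remote shell
	)
}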
	I1108 00:13:19.465800   51228 start.go:369] acquired machines lock for "default-k8s-diff-port-039263" in 1m18.442604828s
	I1108 00:13:19.465853   51228 start.go:96] Skipping create...Using existing machine configuration
	I1108 00:13:19.465863   51228 fix.go:54] fixHost starting: 
	I1108 00:13:19.466321   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:13:19.466373   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:13:19.485614   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32967
	I1108 00:13:19.486012   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:13:19.486457   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:13:19.486478   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:13:19.486839   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:13:19.487016   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:19.487158   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetState
	I1108 00:13:19.488697   51228 fix.go:102] recreateIfNeeded on default-k8s-diff-port-039263: state=Stopped err=<nil>
	I1108 00:13:19.488733   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	W1108 00:13:19.488889   51228 fix.go:128] unexpected machine state, will restart: <nil>
	I1108 00:13:19.490913   51228 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-039263" ...
	I1108 00:13:19.492333   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Start
	I1108 00:13:19.492481   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Ensuring networks are active...
	I1108 00:13:19.493162   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Ensuring network default is active
	I1108 00:13:19.493592   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Ensuring network mk-default-k8s-diff-port-039263 is active
	I1108 00:13:19.494016   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Getting domain xml...
	I1108 00:13:19.494668   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Creating domain...
	I1108 00:13:20.910918   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting to get IP...
	I1108 00:13:20.911948   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:20.912423   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:20.912517   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:20.912403   51635 retry.go:31] will retry after 265.914494ms: waiting for machine to come up
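The default-k8s-diff-port profile is now walking the same fixHost path as the two machines above: take the machines lock, ask the driver for the domain state, and, since the state is Stopped, restart the existing VM rather than recreating it ("Skipping create...Using existing machine configuration"). A reduced sketch of that branch; the Driver interface and state names are simplifications of libmachine's API, not its real signatures.

package main

import "fmt"

// State is a simplified stand-in for libmachine's machine state.
type State int

const (
	Running State = iota
	Stopped
)

// Driver is the minimal surface this sketch needs from a machine driver.
type Driver interface {
	GetState() (State, error)
	Start() error
}

// recreateIfNeeded boots a stopped machine in place, matching the
// "unexpected machine state, will restart" path in the log.
func recreateIfNeeded(d Driver) error {
	st, err := d.GetState()
	if err != nil {
		return err
	}
	if st == Stopped {
		return d.Start() // reuse the existing domain; nothing is re-created
	}
	return nil // already running, nothing to fix
}

type stubDriver struct{ started bool }

func (s *stubDriver) GetState() (State, error) { return Stopped, nil }
func (s *stubDriver) Start() error             { s.started = true; return nil }

func main() {
	d := &stubDriver{}
	if err := recreateIfNeeded(d); err != nil {
		panic(err)
	}
	fmt.Println("started:", d.started)
}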
	I1108 00:13:18.092086   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.092516   50613 main.go:141] libmachine: (embed-certs-253253) Found IP for machine: 192.168.39.159
	I1108 00:13:18.092544   50613 main.go:141] libmachine: (embed-certs-253253) Reserving static IP address...
	I1108 00:13:18.092568   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has current primary IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.092947   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "embed-certs-253253", mac: "52:54:00:1a:6e:cb", ip: "192.168.39.159"} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.092980   50613 main.go:141] libmachine: (embed-certs-253253) DBG | skip adding static IP to network mk-embed-certs-253253 - found existing host DHCP lease matching {name: "embed-certs-253253", mac: "52:54:00:1a:6e:cb", ip: "192.168.39.159"}
	I1108 00:13:18.092999   50613 main.go:141] libmachine: (embed-certs-253253) Reserved static IP address: 192.168.39.159
	I1108 00:13:18.093019   50613 main.go:141] libmachine: (embed-certs-253253) Waiting for SSH to be available...
	I1108 00:13:18.093036   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Getting to WaitForSSH function...
	I1108 00:13:18.094941   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.095285   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.095311   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.095472   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Using SSH client type: external
	I1108 00:13:18.095487   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa (-rw-------)
	I1108 00:13:18.095519   50613 main.go:141] libmachine: (embed-certs-253253) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1108 00:13:18.095535   50613 main.go:141] libmachine: (embed-certs-253253) DBG | About to run SSH command:
	I1108 00:13:18.095545   50613 main.go:141] libmachine: (embed-certs-253253) DBG | exit 0
	I1108 00:13:18.184364   50613 main.go:141] libmachine: (embed-certs-253253) DBG | SSH cmd err, output: <nil>: 
	I1108 00:13:18.184700   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetConfigRaw
	I1108 00:13:18.264914   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetIP
	I1108 00:13:18.267404   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.267716   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.267752   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.267951   50613 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/config.json ...
	I1108 00:13:18.268153   50613 machine.go:88] provisioning docker machine ...
	I1108 00:13:18.268171   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:13:18.268382   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetMachineName
	I1108 00:13:18.268642   50613 buildroot.go:166] provisioning hostname "embed-certs-253253"
	I1108 00:13:18.268662   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetMachineName
	I1108 00:13:18.268783   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:18.270977   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.271275   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.271302   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.271485   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:18.271683   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.271873   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.272021   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:18.272185   50613 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:18.272549   50613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1108 00:13:18.272564   50613 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-253253 && echo "embed-certs-253253" | sudo tee /etc/hostname
	I1108 00:13:18.408618   50613 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-253253
	
	I1108 00:13:18.408655   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:18.411325   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.411629   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.411673   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.411793   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:18.412024   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.412204   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.412353   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:18.412513   50613 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:18.412864   50613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1108 00:13:18.412884   50613 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-253253' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-253253/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-253253' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 00:13:18.537585   50613 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 00:13:18.537611   50613 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1108 00:13:18.537628   50613 buildroot.go:174] setting up certificates
	I1108 00:13:18.537636   50613 provision.go:83] configureAuth start
	I1108 00:13:18.537644   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetMachineName
	I1108 00:13:18.537930   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetIP
	I1108 00:13:18.540544   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.540937   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.540966   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.541078   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:18.543184   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.543455   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.543486   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.543559   50613 provision.go:138] copyHostCerts
	I1108 00:13:18.543621   50613 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1108 00:13:18.543639   50613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1108 00:13:18.543692   50613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1108 00:13:18.543793   50613 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1108 00:13:18.543801   50613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1108 00:13:18.543823   50613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1108 00:13:18.543876   50613 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1108 00:13:18.543884   50613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1108 00:13:18.543900   50613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1108 00:13:18.543962   50613 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.embed-certs-253253 san=[192.168.39.159 192.168.39.159 localhost 127.0.0.1 minikube embed-certs-253253]
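provision.go here mints a per-machine server certificate signed by the shared minikube CA, with subject alternative names covering the VM's IP, localhost, and its hostnames. The following is a self-contained illustration of producing a certificate with those SANs using Go's standard library; it is self-signed for brevity (minikube signs with its CA key) and is not minikube's actual code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-253253"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log line above.
		IPAddresses: []net.IP{net.ParseIP("192.168.39.159"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "embed-certs-253253"},
	}
	// Self-signed (template == parent) for brevity; minikube uses its CA here.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}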
	I1108 00:13:18.707824   50613 provision.go:172] copyRemoteCerts
	I1108 00:13:18.707880   50613 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 00:13:18.707905   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:18.710820   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.711181   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.711208   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.711437   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:18.711642   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.711815   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:18.711973   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:13:18.803200   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 00:13:18.827267   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1108 00:13:18.850710   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 00:13:18.876752   50613 provision.go:86] duration metric: configureAuth took 339.103407ms
	I1108 00:13:18.876781   50613 buildroot.go:189] setting minikube options for container-runtime
	I1108 00:13:18.876987   50613 config.go:182] Loaded profile config "embed-certs-253253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:13:18.877075   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:18.879751   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.880121   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.880149   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.880331   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:18.880501   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.880649   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.880772   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:18.880929   50613 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:18.881240   50613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1108 00:13:18.881257   50613 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 00:13:19.199987   50613 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 00:13:19.200012   50613 machine.go:91] provisioned docker machine in 931.846262ms
	I1108 00:13:19.200023   50613 start.go:300] post-start starting for "embed-certs-253253" (driver="kvm2")
	I1108 00:13:19.200035   50613 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 00:13:19.200057   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:13:19.200377   50613 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 00:13:19.200409   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:19.203230   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.203610   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:19.203644   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.203771   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:19.203963   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:19.204118   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:19.204231   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:13:19.297991   50613 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 00:13:19.303630   50613 info.go:137] Remote host: Buildroot 2021.02.12
	I1108 00:13:19.303655   50613 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1108 00:13:19.303721   50613 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1108 00:13:19.303831   50613 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1108 00:13:19.303956   50613 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 00:13:19.315605   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:13:19.339647   50613 start.go:303] post-start completed in 139.611237ms
	I1108 00:13:19.339665   50613 fix.go:56] fixHost completed within 19.805611247s
	I1108 00:13:19.339687   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:19.342291   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.342623   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:19.342648   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.342838   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:19.343019   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:19.343147   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:19.343323   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:19.343483   50613 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:19.343856   50613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1108 00:13:19.343868   50613 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1108 00:13:19.465645   50613 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699402399.415738784
	
	I1108 00:13:19.465670   50613 fix.go:206] guest clock: 1699402399.415738784
	I1108 00:13:19.465681   50613 fix.go:219] Guest: 2023-11-08 00:13:19.415738784 +0000 UTC Remote: 2023-11-08 00:13:19.339668655 +0000 UTC m=+237.442917453 (delta=76.070129ms)
	I1108 00:13:19.465704   50613 fix.go:190] guest clock delta is within tolerance: 76.070129ms
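
fix.go reads the guest clock with `date +%s.%N`, parses the epoch value, and accepts the host/guest drift when it falls inside a tolerance. A sketch of that comparison; the one-second tolerance here is an assumed value for illustration:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1699402399.415738784" (the output of date +%s.%N)
// into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1699402399.415738784")
	if err != nil {
		panic(err)
	}
	remote := time.Now()
	delta := remote.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed threshold for illustration
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
}
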
	I1108 00:13:19.465710   50613 start.go:83] releasing machines lock for "embed-certs-253253", held for 19.931686858s
	I1108 00:13:19.465738   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:13:19.465996   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetIP
	I1108 00:13:19.468862   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.469185   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:19.469223   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.469365   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:13:19.469898   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:13:19.470091   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:13:19.470174   50613 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 00:13:19.470215   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:19.470300   50613 ssh_runner.go:195] Run: cat /version.json
	I1108 00:13:19.470321   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:19.473140   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.473285   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.473517   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:19.473562   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.473594   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:19.473612   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.473662   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:19.473777   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:19.473843   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:19.474004   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:19.474007   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:19.474153   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:13:19.474192   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:19.474344   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:13:19.565638   50613 ssh_runner.go:195] Run: systemctl --version
	I1108 00:13:19.591686   50613 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 00:13:19.747192   50613 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 00:13:19.755053   50613 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 00:13:19.755134   50613 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:13:19.774522   50613 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 00:13:19.774551   50613 start.go:472] detecting cgroup driver to use...
	I1108 00:13:19.774652   50613 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 00:13:19.795492   50613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 00:13:19.809888   50613 docker.go:203] disabling cri-docker service (if available) ...
	I1108 00:13:19.809958   50613 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 00:13:19.823108   50613 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 00:13:19.835588   50613 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 00:13:19.940017   50613 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 00:13:20.075405   50613 docker.go:219] disabling docker service ...
	I1108 00:13:20.075460   50613 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 00:13:20.090949   50613 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 00:13:20.103551   50613 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 00:13:20.226887   50613 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 00:13:20.352088   50613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 00:13:20.367626   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 00:13:20.388084   50613 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1108 00:13:20.388153   50613 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:20.398506   50613 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 00:13:20.398573   50613 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:20.408335   50613 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:20.417991   50613 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:20.427599   50613 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
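
The sed invocations above rewrite CRI-O's drop-in config in place: pin the pause image, force the cgroupfs cgroup manager, drop any existing conmon_cgroup line, and re-insert conmon_cgroup = "pod" after the manager setting. A sketch that composes the same command strings (each would be handed to the SSH runner):

package main

import "fmt"

// crioConfigCmds mirrors the sed edits above against CRI-O's drop-in config.
func crioConfigCmds(pauseImage string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
	}
}

func main() {
	for _, c := range crioConfigCmds("registry.k8s.io/pause:3.9") {
		fmt.Println(c) // in practice each string is executed over SSH
	}
}
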
	I1108 00:13:20.439537   50613 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 00:13:20.450914   50613 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1108 00:13:20.450972   50613 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1108 00:13:20.464456   50613 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
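
The netfilter probe above fails with status 255 because /proc/sys/net/bridge does not exist until the br_netfilter module is loaded, so the code falls back to modprobe and then enables IPv4 forwarding. A sketch of that check-then-load fallback using os/exec:

package main

import (
	"log"
	"os/exec"
)

func ensureNetfilter() error {
	// Probe first: this succeeds only if br_netfilter is already loaded.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		log.Printf("sysctl probe failed (%v), loading br_netfilter", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return err
		}
	}
	// Make sure the kernel forwards IPv4 between interfaces either way.
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureNetfilter(); err != nil {
		log.Fatal(err)
	}
}
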
	I1108 00:13:20.475133   50613 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 00:13:20.586162   50613 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 00:13:20.799540   50613 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 00:13:20.799615   50613 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 00:13:20.808503   50613 start.go:540] Will wait 60s for crictl version
	I1108 00:13:20.808551   50613 ssh_runner.go:195] Run: which crictl
	I1108 00:13:20.812371   50613 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 00:13:20.853073   50613 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1108 00:13:20.853166   50613 ssh_runner.go:195] Run: crio --version
	I1108 00:13:20.904737   50613 ssh_runner.go:195] Run: crio --version
	I1108 00:13:20.958281   50613 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1108 00:13:20.959792   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetIP
	I1108 00:13:20.962399   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:20.962740   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:20.962775   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:20.963037   50613 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1108 00:13:20.967403   50613 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
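
The bash one-liner above updates /etc/hosts in a filter-and-append step: strip any line already mapping host.minikube.internal, append the fresh entry, and copy the temp file back over /etc/hosts. A pure-Go sketch of the same idea (the rename at the end stands in for the `sudo cp` in the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops any existing line for name and appends a fresh ip<TAB>name
// mapping, mirroring the grep -v / echo / cp pipeline above.
func upsertHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line) // keep every line except the stale mapping
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	tmp := hostsPath + ".tmp" // the shell version writes /tmp/h.$$
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	if err := upsertHost("/tmp/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
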
	I1108 00:13:20.980199   50613 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1108 00:13:20.980261   50613 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:13:21.024679   50613 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1108 00:13:21.024757   50613 ssh_runner.go:195] Run: which lz4
	I1108 00:13:21.028861   50613 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1108 00:13:21.032736   50613 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1108 00:13:21.032762   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
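
The stat probe above is an existence check for /preloaded.tar.lz4 on the guest; since it exits with status 1, the cached ~458 MB preload tarball is transferred. A sketch of that decision; it shells out to the system ssh/scp binaries for brevity, whereas minikube streams the file over its existing SSH session:

package main

import (
	"log"
	"os/exec"
)

// ensurePreload copies the local preload tarball to the guest unless a file
// is already there. `stat -c "%s %y"` exits 1 when the path is absent.
func ensurePreload(sshTarget, keyPath, localTar string) error {
	probe := exec.Command("ssh", "-i", keyPath, sshTarget, `stat -c "%s %y" /preloaded.tar.lz4`)
	if probe.Run() == nil {
		return nil // present; a real check would also compare size and mtime
	}
	return exec.Command("scp", "-i", keyPath, localTar, sshTarget+":/preloaded.tar.lz4").Run()
}

func main() {
	// Illustrative target and paths, not the exact values from this run.
	if err := ensurePreload("docker@192.168.39.159",
		"/home/jenkins/.minikube/machines/embed-certs-253253/id_rsa",
		"preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4"); err != nil {
		log.Fatal(err)
	}
}
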
	I1108 00:13:19.898602   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1108 00:13:19.898655   50505 cache_images.go:123] Successfully loaded all cached images
	I1108 00:13:19.898663   50505 cache_images.go:92] LoadImages completed in 18.919280882s
	I1108 00:13:19.898742   50505 ssh_runner.go:195] Run: crio config
	I1108 00:13:19.970909   50505 cni.go:84] Creating CNI manager for ""
	I1108 00:13:19.970936   50505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:13:19.970958   50505 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1108 00:13:19.970986   50505 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.176 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-320390 NodeName:no-preload-320390 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 00:13:19.971171   50505 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.176
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-320390"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.176
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.176"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 00:13:19.971273   50505 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-320390 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:no-preload-320390 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
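
kubeadm.go:181 renders the config dump above from the options struct logged at kubeadm.go:176. A trimmed text/template sketch of that kind of rendering, covering only a few of the fields visible in the log (the template is illustrative, not minikube's actual one):

package main

import (
	"os"
	"text/template"
)

type kubeadmOpts struct {
	AdvertiseAddress string
	NodeName         string
	PodSubnet        string
	ServiceCIDR      string
	K8sVersion       string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, kubeadmOpts{
		AdvertiseAddress: "192.168.61.176",
		NodeName:         "no-preload-320390",
		PodSubnet:        "10.244.0.0/16",
		ServiceCIDR:      "10.96.0.0/12",
		K8sVersion:       "v1.28.3",
	})
}
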
	I1108 00:13:19.971347   50505 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1108 00:13:19.984469   50505 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 00:13:19.984551   50505 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 00:13:19.995491   50505 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1108 00:13:20.013609   50505 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 00:13:20.031507   50505 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I1108 00:13:20.051978   50505 ssh_runner.go:195] Run: grep 192.168.61.176	control-plane.minikube.internal$ /etc/hosts
	I1108 00:13:20.057139   50505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.176	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:13:20.071438   50505 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390 for IP: 192.168.61.176
	I1108 00:13:20.071471   50505 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:13:20.071635   50505 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1108 00:13:20.071691   50505 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1108 00:13:20.071782   50505 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/client.key
	I1108 00:13:20.071848   50505 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/apiserver.key.492ad1cf
	I1108 00:13:20.071899   50505 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/proxy-client.key
	I1108 00:13:20.072026   50505 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem (1338 bytes)
	W1108 00:13:20.072064   50505 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848_empty.pem, impossibly tiny 0 bytes
	I1108 00:13:20.072080   50505 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 00:13:20.072130   50505 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1108 00:13:20.072167   50505 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1108 00:13:20.072205   50505 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1108 00:13:20.072260   50505 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:13:20.073092   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1108 00:13:20.099422   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 00:13:20.126257   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 00:13:20.153126   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 00:13:20.184849   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 00:13:20.215515   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 00:13:20.247686   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 00:13:20.277712   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 00:13:20.304438   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /usr/share/ca-certificates/168482.pem (1708 bytes)
	I1108 00:13:20.330321   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 00:13:20.361411   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem --> /usr/share/ca-certificates/16848.pem (1338 bytes)
	I1108 00:13:20.390456   50505 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 00:13:20.410634   50505 ssh_runner.go:195] Run: openssl version
	I1108 00:13:20.418597   50505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168482.pem && ln -fs /usr/share/ca-certificates/168482.pem /etc/ssl/certs/168482.pem"
	I1108 00:13:20.431853   50505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168482.pem
	I1108 00:13:20.438127   50505 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1108 00:13:20.438271   50505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168482.pem
	I1108 00:13:20.445644   50505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168482.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 00:13:20.456959   50505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 00:13:20.466413   50505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:20.472311   50505 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:20.472365   50505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:20.477965   50505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 00:13:20.487454   50505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16848.pem && ln -fs /usr/share/ca-certificates/16848.pem /etc/ssl/certs/16848.pem"
	I1108 00:13:20.496731   50505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16848.pem
	I1108 00:13:20.502531   50505 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1108 00:13:20.502591   50505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16848.pem
	I1108 00:13:20.509683   50505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16848.pem /etc/ssl/certs/51391683.0"
	I1108 00:13:20.520960   50505 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1108 00:13:20.525545   50505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 00:13:20.531367   50505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 00:13:20.537422   50505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 00:13:20.543607   50505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 00:13:20.548942   50505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 00:13:20.554419   50505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
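
Each `openssl x509 ... -checkend 86400` above exits non-zero when the certificate expires within the next 24 hours, which is what would trigger regeneration. An equivalent check with Go's crypto/x509:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM cert at path expires inside d,
// mirroring `openssl x509 -noout -in path -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Until(cert.NotAfter) < d, nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
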
	I1108 00:13:20.559633   50505 kubeadm.go:404] StartCluster: {Name:no-preload-320390 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-320390 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.176 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:13:20.559719   50505 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 00:13:20.559766   50505 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:13:20.603718   50505 cri.go:89] found id: ""
	I1108 00:13:20.603795   50505 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 00:13:20.613389   50505 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1108 00:13:20.613418   50505 kubeadm.go:636] restartCluster start
	I1108 00:13:20.613476   50505 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 00:13:20.622276   50505 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:20.623645   50505 kubeconfig.go:92] found "no-preload-320390" server: "https://192.168.61.176:8443"
	I1108 00:13:20.626874   50505 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 00:13:20.638188   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:20.638238   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:20.649536   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:20.649553   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:20.649610   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:20.660145   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:21.160858   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:21.160936   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:21.174163   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:21.660441   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:21.660526   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:21.675795   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:22.160281   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:22.160358   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:22.175777   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:22.660249   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:22.660328   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:22.675747   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:23.160280   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:23.160360   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:23.174686   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:23.661260   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:23.661343   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:23.675936   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:24.160440   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:24.160558   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:24.174501   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
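
The repeated api_server.go:166/170 pairs are a poll: roughly every 500 ms the runner asks pgrep for a kube-apiserver PID and logs "stopped" until one appears. A sketch of that loop with an overall deadline:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServer polls pgrep until kube-apiserver has a PID or the deadline
// passes, mirroring the Checking/stopped lines in the log.
func waitForAPIServer(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		time.Sleep(500 * time.Millisecond) // pgrep exits 1 while no process matches
	}
	return "", fmt.Errorf("apiserver did not start within %v", timeout)
}

func main() {
	pid, err := waitForAPIServer(2 * time.Minute)
	fmt.Println(pid, err)
}
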
	I1108 00:13:21.180066   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:21.180534   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:21.180563   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:21.180492   51635 retry.go:31] will retry after 320.996627ms: waiting for machine to come up
	I1108 00:13:21.503202   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:21.503721   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:21.503750   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:21.503689   51635 retry.go:31] will retry after 431.944242ms: waiting for machine to come up
	I1108 00:13:21.937564   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:21.938025   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:21.938054   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:21.937972   51635 retry.go:31] will retry after 592.354358ms: waiting for machine to come up
	I1108 00:13:22.531850   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:22.532321   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:22.532364   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:22.532272   51635 retry.go:31] will retry after 589.753727ms: waiting for machine to come up
	I1108 00:13:23.124275   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:23.124784   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:23.124825   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:23.124746   51635 retry.go:31] will retry after 596.910282ms: waiting for machine to come up
	I1108 00:13:23.722967   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:23.723389   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:23.723419   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:23.723349   51635 retry.go:31] will retry after 793.320391ms: waiting for machine to come up
	I1108 00:13:24.518525   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:24.518953   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:24.518985   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:24.518914   51635 retry.go:31] will retry after 1.247294281s: waiting for machine to come up
	I1108 00:13:25.768137   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:25.768598   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:25.768634   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:25.768541   51635 retry.go:31] will retry after 1.468389149s: waiting for machine to come up
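
The retry.go:31 lines wait for the VM's DHCP lease with a growing, jittered delay (321 ms, 432 ms, 592 ms, ... 1.47 s above). A sketch of that backoff pattern; the 1.5x growth factor and jitter range are assumptions inferred from the logged delays:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts run out, sleeping a
// jittered, growing delay in between, as the retry.go lines above do.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2 // grow ~1.5x per attempt (assumed factor)
	}
	return fmt.Errorf("gave up after %d attempts", attempts)
}

func main() {
	_ = retryWithBackoff(10, 300*time.Millisecond, func() error {
		return fmt.Errorf("unable to find current IP address") // stand-in probe
	})
}
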
	I1108 00:13:22.802292   50613 crio.go:444] Took 1.773480 seconds to copy over tarball
	I1108 00:13:22.802374   50613 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1108 00:13:25.811996   50613 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.009592787s)
	I1108 00:13:25.812027   50613 crio.go:451] Took 3.009706 seconds to extract the tarball
	I1108 00:13:25.812036   50613 ssh_runner.go:146] rm: /preloaded.tar.lz4
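
ssh_runner prints a second "Completed" line with the elapsed time whenever a command runs long, as with the ~3 s lz4 extraction above. A sketch of that run-and-time wrapper:

package main

import (
	"log"
	"os/exec"
	"time"
)

// runTimed executes cmd and, like the ssh_runner.go:195/235 pairs above, logs
// a Completed line with the duration when the command takes noticeable time.
func runTimed(name string, args ...string) error {
	start := time.Now()
	log.Printf("Run: %s %v", name, args)
	err := exec.Command(name, args...).Run()
	if d := time.Since(start); d > time.Second {
		log.Printf("Completed: %s: (%v)", name, d)
	}
	return err
}

func main() {
	_ = runTimed("tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
}
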
	I1108 00:13:25.852011   50613 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:13:25.903032   50613 crio.go:496] all images are preloaded for cri-o runtime.
	I1108 00:13:25.903055   50613 cache_images.go:84] Images are preloaded, skipping loading
	I1108 00:13:25.903160   50613 ssh_runner.go:195] Run: crio config
	I1108 00:13:25.964562   50613 cni.go:84] Creating CNI manager for ""
	I1108 00:13:25.964585   50613 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:13:25.964601   50613 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1108 00:13:25.964618   50613 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.159 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-253253 NodeName:embed-certs-253253 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 00:13:25.964768   50613 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-253253"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 00:13:25.964869   50613 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-253253 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:embed-certs-253253 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1108 00:13:25.964931   50613 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1108 00:13:25.973956   50613 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 00:13:25.974031   50613 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 00:13:25.982070   50613 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1108 00:13:26.001066   50613 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 00:13:26.020258   50613 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1108 00:13:26.039418   50613 ssh_runner.go:195] Run: grep 192.168.39.159	control-plane.minikube.internal$ /etc/hosts
	I1108 00:13:26.043133   50613 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:13:26.055865   50613 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253 for IP: 192.168.39.159
	I1108 00:13:26.055902   50613 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:13:26.056069   50613 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1108 00:13:26.056268   50613 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1108 00:13:26.056374   50613 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/client.key
	I1108 00:13:26.128533   50613 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/apiserver.key.b15c5797
	I1108 00:13:26.128666   50613 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/proxy-client.key
	I1108 00:13:26.128842   50613 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem (1338 bytes)
	W1108 00:13:26.128884   50613 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848_empty.pem, impossibly tiny 0 bytes
	I1108 00:13:26.128895   50613 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 00:13:26.128930   50613 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1108 00:13:26.128953   50613 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1108 00:13:26.128975   50613 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1108 00:13:26.129016   50613 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:13:26.129621   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1108 00:13:26.153776   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 00:13:26.179006   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 00:13:26.202199   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 00:13:26.225241   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 00:13:26.247745   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 00:13:26.270546   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 00:13:26.297075   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 00:13:26.320835   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem --> /usr/share/ca-certificates/16848.pem (1338 bytes)
	I1108 00:13:26.344068   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /usr/share/ca-certificates/168482.pem (1708 bytes)
	I1108 00:13:26.367085   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 00:13:26.391491   50613 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 00:13:26.408055   50613 ssh_runner.go:195] Run: openssl version
	I1108 00:13:26.413824   50613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168482.pem && ln -fs /usr/share/ca-certificates/168482.pem /etc/ssl/certs/168482.pem"
	I1108 00:13:26.423666   50613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168482.pem
	I1108 00:13:26.428281   50613 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1108 00:13:26.428332   50613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168482.pem
	I1108 00:13:26.433901   50613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168482.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 00:13:26.443832   50613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 00:13:26.453722   50613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:26.458290   50613 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:26.458341   50613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:26.464035   50613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 00:13:26.473908   50613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16848.pem && ln -fs /usr/share/ca-certificates/16848.pem /etc/ssl/certs/16848.pem"
	I1108 00:13:26.483600   50613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16848.pem
	I1108 00:13:26.488053   50613 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1108 00:13:26.488113   50613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16848.pem
	I1108 00:13:26.493571   50613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16848.pem /etc/ssl/certs/51391683.0"
	I1108 00:13:26.503466   50613 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1108 00:13:26.508047   50613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 00:13:26.514165   50613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 00:13:26.520278   50613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 00:13:26.526421   50613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 00:13:26.532388   50613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 00:13:26.538323   50613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1108 00:13:26.544215   50613 kubeadm.go:404] StartCluster: {Name:embed-certs-253253 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-253253 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:13:26.544287   50613 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 00:13:26.544330   50613 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:13:26.586501   50613 cri.go:89] found id: ""
	I1108 00:13:26.586578   50613 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 00:13:26.596647   50613 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1108 00:13:26.596676   50613 kubeadm.go:636] restartCluster start
	I1108 00:13:26.596734   50613 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 00:13:26.605901   50613 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
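The "found existing configuration files, will attempt cluster restart" decision above hinges on a single "sudo ls" over the kubelet config files and the etcd data directory: if all three exist, minikube restarts the cluster in place instead of re-initializing it. A sketch of that probe, assuming direct filesystem access rather than the ssh_runner the log uses:

// Decide restart-vs-init by checking the same three paths as the log.
package main

import (
	"fmt"
	"os"
)

func main() {
	paths := []string{
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd",
	}
	for _, p := range paths {
		if _, err := os.Stat(p); err != nil {
			fmt.Println("missing", p, "- a full kubeadm init is needed")
			return
		}
	}
	fmt.Println("found existing configuration files, will attempt cluster restart")
}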
	I1108 00:13:26.607305   50613 kubeconfig.go:92] found "embed-certs-253253" server: "https://192.168.39.159:8443"
	I1108 00:13:26.610434   50613 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 00:13:26.619238   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:26.619291   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:26.630724   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:26.630746   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:26.630787   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:26.641997   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
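The repeated "Checking apiserver status ..." lines above, and the long runs that follow for pids 50505 and 50613, are a fixed-interval poll: the same pgrep probe fires roughly every 500ms until it reports a kube-apiserver pid or a deadline expires, at which point the caller gives up with the "context deadline exceeded" seen at 00:13:30.638722 and 00:13:36.619909 below. A minimal sketch of that loop, with runSSH as a hypothetical stand-in for minikube's ssh_runner (executed locally here):

// Poll "pgrep kube-apiserver" every 500ms until it succeeds or times out.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func runSSH(cmd string) (string, error) { // stand-in for ssh_runner.Run
	out, err := exec.Command("/bin/sh", "-c", cmd).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	for {
		pid, err := runSSH("sudo pgrep -xnf kube-apiserver.*minikube.*")
		if err == nil && pid != "" {
			fmt.Println("apiserver pid:", pid)
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("needs reconfigure: apiserver error:", ctx.Err()) // "context deadline exceeded"
			return
		case <-time.After(500 * time.Millisecond):
		}
	}
}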
	I1108 00:13:24.660263   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:24.660349   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:24.675197   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:25.160678   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:25.160774   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:25.172593   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:25.660613   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:25.660696   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:25.672242   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:26.160884   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:26.160978   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:26.174734   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:26.660269   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:26.660337   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:26.671721   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:27.160250   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:27.160344   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:27.171104   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:27.660667   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:27.660729   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:27.671899   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:28.160408   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:28.160471   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:28.170733   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:28.660264   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:28.660338   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:28.671482   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:29.161084   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:29.161163   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:29.172174   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:27.238049   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:27.238487   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:27.238518   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:27.238428   51635 retry.go:31] will retry after 1.602246301s: waiting for machine to come up
	I1108 00:13:28.842785   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:28.843235   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:28.843259   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:28.843188   51635 retry.go:31] will retry after 2.218327688s: waiting for machine to come up
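The libmachine lines above show the other wait pattern in this log: instead of a fixed interval, each failed IP lookup schedules the next attempt after a randomized, growing delay (1.6s, 2.2s here, then 2.9s and 3.9s further down). A sketch of that cadence, with lookupIP as a hypothetical stub for the DHCP-lease query:

// Retry with a randomized, growing backoff, as in the retry.go lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func lookupIP() (string, error) { return "", errors.New("no DHCP lease yet") } // stub

func main() {
	base := time.Second
	for attempt := 1; attempt <= 10; attempt++ {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		wait := base + time.Duration(rand.Int63n(int64(base))) // add jitter
		fmt.Printf("retry %d: will retry after %s: waiting for machine to come up\n", attempt, wait)
		time.Sleep(wait)
		base = base * 3 / 2 // grow the base delay each round
	}
}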
	I1108 00:13:27.142567   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:27.242647   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:27.256767   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:27.642212   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:27.642306   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:27.654185   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:28.142751   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:28.142832   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:28.154141   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:28.642738   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:28.642817   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:28.654476   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:29.143085   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:29.143168   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:29.154553   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:29.642422   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:29.642499   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:29.658048   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:30.142497   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:30.142568   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:30.153710   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:30.642216   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:30.642291   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:30.658036   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:31.142547   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:31.142634   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:31.159124   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:31.642720   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:31.642810   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:31.654593   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:29.660882   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:29.660944   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:29.675528   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:30.161058   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:30.161121   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:30.171493   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:30.638722   50505 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1108 00:13:30.638762   50505 kubeadm.go:1128] stopping kube-system containers ...
	I1108 00:13:30.638776   50505 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1108 00:13:30.638825   50505 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:13:30.677982   50505 cri.go:89] found id: ""
	I1108 00:13:30.678064   50505 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1108 00:13:30.693650   50505 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:13:30.702679   50505 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:13:30.702757   50505 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:13:30.711179   50505 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1108 00:13:30.711212   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:30.843638   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:31.970868   50505 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.127188218s)
	I1108 00:13:31.970904   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:32.167903   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:32.242076   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
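The five commands above are the heart of the restart path: rather than a monolithic "kubeadm init", minikube replays the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly copied /var/tmp/minikube/kubeadm.yaml. A sketch of that sequence, assuming kubeadm is on PATH (the log actually runs it via sudo with a pinned /var/lib/minikube/binaries directory):

// Replay the kubeadm init phases in the order the log shows.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			fmt.Printf("kubeadm %v failed: %v\n%s", p, err, out)
			return
		}
		fmt.Println("completed phase:", p[2])
	}
}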
	I1108 00:13:32.324914   50505 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:13:32.325001   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:32.342576   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:32.861296   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:33.360958   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:33.861308   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:31.062973   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:31.063425   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:31.063465   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:31.063370   51635 retry.go:31] will retry after 2.935881965s: waiting for machine to come up
	I1108 00:13:34.002009   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:34.002456   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:34.002481   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:34.002385   51635 retry.go:31] will retry after 2.918632194s: waiting for machine to come up
	I1108 00:13:32.142573   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:32.142668   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:32.156513   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:32.643129   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:32.643203   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:32.654790   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:33.143023   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:33.143114   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:33.159475   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:33.642631   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:33.642728   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:33.658632   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:34.142142   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:34.142218   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:34.158375   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:34.642356   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:34.642437   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:34.657692   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:35.142180   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:35.142276   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:35.157616   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:35.642121   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:35.642194   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:35.656642   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:36.142162   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:36.142270   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:36.153340   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:36.619909   50613 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1108 00:13:36.619941   50613 kubeadm.go:1128] stopping kube-system containers ...
	I1108 00:13:36.619958   50613 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1108 00:13:36.620035   50613 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:13:36.656935   50613 cri.go:89] found id: ""
	I1108 00:13:36.657008   50613 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1108 00:13:36.671784   50613 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:13:36.680073   50613 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:13:36.680120   50613 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:13:36.688560   50613 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1108 00:13:36.688575   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:36.802484   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:34.361558   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:34.860720   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:34.881793   50505 api_server.go:72] duration metric: took 2.55688905s to wait for apiserver process to appear ...
	I1108 00:13:34.881823   50505 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:13:34.881843   50505 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1108 00:13:38.396447   50505 api_server.go:279] https://192.168.61.176:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:13:38.396488   50505 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:13:38.396503   50505 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1108 00:13:38.471135   50505 api_server.go:279] https://192.168.61.176:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:13:38.471165   50505 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:13:38.971845   50505 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1108 00:13:38.977126   50505 api_server.go:279] https://192.168.61.176:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:13:38.977163   50505 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:13:39.472030   50505 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1108 00:13:39.477778   50505 api_server.go:279] https://192.168.61.176:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:13:39.477810   50505 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:13:39.971333   50505 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1108 00:13:39.977224   50505 api_server.go:279] https://192.168.61.176:8443/healthz returned 200:
	ok
	I1108 00:13:39.987415   50505 api_server.go:141] control plane version: v1.28.3
	I1108 00:13:39.987446   50505 api_server.go:131] duration metric: took 5.10561478s to wait for apiserver health ...
	I1108 00:13:39.987456   50505 cni.go:84] Creating CNI manager for ""
	I1108 00:13:39.987465   50505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:13:39.989270   50505 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
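The healthz exchange above follows the usual apiserver startup arc: first 403, because the probe is unauthenticated (system:anonymous) and the RBAC bootstrap roles that normally permit anonymous /healthz reads do not exist yet; then 500 while poststarthooks such as rbac/bootstrap-roles are still settling; finally 200 "ok" at 00:13:39.977224. A sketch of the poll, assuming the endpoint from the log and skipping cert verification the way an anonymous probe against a self-signed apiserver must:

// Poll /healthz until the apiserver reports 200 ok.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed apiserver cert
		},
	}
	for {
		resp, err := client.Get("https://192.168.61.176:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK { // body is just "ok"
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}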
	I1108 00:13:36.922427   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:36.922874   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:36.922916   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:36.922824   51635 retry.go:31] will retry after 3.960656744s: waiting for machine to come up
	I1108 00:13:40.886022   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:40.886563   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Found IP for machine: 192.168.72.116
	I1108 00:13:40.886591   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has current primary IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:40.886601   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Reserving static IP address...
	I1108 00:13:40.886974   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-039263", mac: "52:54:00:aa:72:05", ip: "192.168.72.116"} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:40.887012   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | skip adding static IP to network mk-default-k8s-diff-port-039263 - found existing host DHCP lease matching {name: "default-k8s-diff-port-039263", mac: "52:54:00:aa:72:05", ip: "192.168.72.116"}
	I1108 00:13:40.887037   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Getting to WaitForSSH function...
	I1108 00:13:40.887058   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Reserved static IP address: 192.168.72.116
	I1108 00:13:40.887072   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for SSH to be available...
	I1108 00:13:40.889373   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:40.889771   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:40.889803   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:40.889991   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Using SSH client type: external
	I1108 00:13:40.890018   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa (-rw-------)
	I1108 00:13:40.890054   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1108 00:13:40.890068   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | About to run SSH command:
	I1108 00:13:40.890082   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | exit 0
	I1108 00:13:37.573684   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:37.781978   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:37.863424   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:37.935306   50613 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:13:37.935377   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:37.947059   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:38.458806   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:38.959076   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:39.459045   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:39.959244   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:40.458249   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:40.480623   50613 api_server.go:72] duration metric: took 2.545315304s to wait for apiserver process to appear ...
	I1108 00:13:40.480650   50613 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:13:40.480668   50613 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1108 00:13:42.285976   50022 start.go:369] acquired machines lock for "old-k8s-version-590541" in 56.809842177s
	I1108 00:13:42.286028   50022 start.go:96] Skipping create...Using existing machine configuration
	I1108 00:13:42.286039   50022 fix.go:54] fixHost starting: 
	I1108 00:13:42.286455   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:13:42.286492   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:13:42.305869   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46287
	I1108 00:13:42.306363   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:13:42.306845   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:13:42.306871   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:13:42.307221   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:13:42.307548   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:13:42.307740   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetState
	I1108 00:13:42.309513   50022 fix.go:102] recreateIfNeeded on old-k8s-version-590541: state=Stopped err=<nil>
	I1108 00:13:42.309539   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	W1108 00:13:42.309706   50022 fix.go:128] unexpected machine state, will restart: <nil>
	I1108 00:13:42.311819   50022 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-590541" ...
	I1108 00:13:40.997357   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | SSH cmd err, output: <nil>: 
	I1108 00:13:40.997688   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetConfigRaw
	I1108 00:13:40.998394   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetIP
	I1108 00:13:41.001148   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.001578   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.001612   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.001940   51228 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/config.json ...
	I1108 00:13:41.002174   51228 machine.go:88] provisioning docker machine ...
	I1108 00:13:41.002197   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:41.002421   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetMachineName
	I1108 00:13:41.002577   51228 buildroot.go:166] provisioning hostname "default-k8s-diff-port-039263"
	I1108 00:13:41.002600   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetMachineName
	I1108 00:13:41.002800   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:41.005167   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.005544   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.005584   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.005873   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:41.006029   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.006176   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.006291   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:41.006425   51228 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:41.007012   51228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.116 22 <nil> <nil>}
	I1108 00:13:41.007036   51228 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-039263 && echo "default-k8s-diff-port-039263" | sudo tee /etc/hostname
	I1108 00:13:41.168664   51228 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-039263
	
	I1108 00:13:41.168698   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:41.171709   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.172090   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.172132   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.172266   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:41.172457   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.172650   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.172867   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:41.173130   51228 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:41.173626   51228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.116 22 <nil> <nil>}
	I1108 00:13:41.173654   51228 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-039263' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-039263/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-039263' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 00:13:41.324510   51228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 00:13:41.324539   51228 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1108 00:13:41.324586   51228 buildroot.go:174] setting up certificates
	I1108 00:13:41.324598   51228 provision.go:83] configureAuth start
	I1108 00:13:41.324610   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetMachineName
	I1108 00:13:41.324933   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetIP
	I1108 00:13:41.327797   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.328176   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.328213   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.328321   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:41.330558   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.330921   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.330955   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.331062   51228 provision.go:138] copyHostCerts
	I1108 00:13:41.331128   51228 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1108 00:13:41.331150   51228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1108 00:13:41.331222   51228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1108 00:13:41.331337   51228 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1108 00:13:41.331355   51228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1108 00:13:41.331387   51228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1108 00:13:41.331467   51228 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1108 00:13:41.331479   51228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1108 00:13:41.331506   51228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1108 00:13:41.331592   51228 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-039263 san=[192.168.72.116 192.168.72.116 localhost 127.0.0.1 minikube default-k8s-diff-port-039263]
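The "generating server cert" line above lists the SANs a machine cert needs: the VM IP, localhost/127.0.0.1, and the machine hostnames. A sketch of a certificate template with that SAN shape, self-signed here for brevity where minikube actually signs with ca.pem/ca-key.pem (key persistence and PEM output are elided):

// Build a server certificate whose SANs match the log's san=[...] list.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-039263"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.72.116"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "default-k8s-diff-port-039263"},
	}
	// Self-signed here; minikube signs with the machine CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Println("server cert DER bytes:", len(der))
}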
	I1108 00:13:41.452051   51228 provision.go:172] copyRemoteCerts
	I1108 00:13:41.452123   51228 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 00:13:41.452156   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:41.454755   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.455056   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.455089   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.455288   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:41.455512   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.455704   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:41.455831   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:13:41.554387   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 00:13:41.586357   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 00:13:41.616703   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1108 00:13:41.646461   51228 provision.go:86] duration metric: configureAuth took 321.850044ms
	I1108 00:13:41.646489   51228 buildroot.go:189] setting minikube options for container-runtime
	I1108 00:13:41.646730   51228 config.go:182] Loaded profile config "default-k8s-diff-port-039263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:13:41.646825   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:41.650386   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.650813   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.650856   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.651031   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:41.651232   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.651422   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.651598   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:41.651763   51228 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:41.652302   51228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.116 22 <nil> <nil>}
	I1108 00:13:41.652325   51228 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 00:13:42.006373   51228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 00:13:42.006401   51228 machine.go:91] provisioned docker machine in 1.004212938s
	I1108 00:13:42.006414   51228 start.go:300] post-start starting for "default-k8s-diff-port-039263" (driver="kvm2")
	I1108 00:13:42.006426   51228 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 00:13:42.006445   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:42.006785   51228 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 00:13:42.006811   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:42.009619   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.009950   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:42.009986   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.010127   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:42.010344   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:42.010515   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:42.010673   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:13:42.106366   51228 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 00:13:42.110676   51228 info.go:137] Remote host: Buildroot 2021.02.12
	I1108 00:13:42.110701   51228 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1108 00:13:42.110770   51228 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1108 00:13:42.110869   51228 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1108 00:13:42.110972   51228 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 00:13:42.121223   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:13:42.146966   51228 start.go:303] post-start completed in 140.536976ms
	I1108 00:13:42.146996   51228 fix.go:56] fixHost completed within 22.681133015s
	I1108 00:13:42.147019   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:42.149705   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.150132   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:42.150165   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.150406   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:42.150606   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:42.150818   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:42.150988   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:42.151156   51228 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:42.151511   51228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.116 22 <nil> <nil>}
	I1108 00:13:42.151523   51228 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1108 00:13:42.285789   51228 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699402422.233004693
	
	I1108 00:13:42.285815   51228 fix.go:206] guest clock: 1699402422.233004693
	I1108 00:13:42.285823   51228 fix.go:219] Guest: 2023-11-08 00:13:42.233004693 +0000 UTC Remote: 2023-11-08 00:13:42.146999966 +0000 UTC m=+101.273648910 (delta=86.004727ms)
	I1108 00:13:42.285869   51228 fix.go:190] guest clock delta is within tolerance: 86.004727ms
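The fix.go lines above parse the guest's `date +%s.%N` output and compare it against local time, keeping the host only if the skew stays within tolerance. A sketch of that check follows; the 2s tolerance is an assumption for illustration, not a value read from the log.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses "seconds.nanoseconds" as printed by `date +%s.%N`
// and returns the guest-minus-host skew.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	guest := time.Unix(sec, nsec)
	return guest.Sub(host), nil
}

func main() {
	const tolerance = 2 * time.Second // assumed threshold for illustration
	d, err := clockDelta("1699402422.233004693", time.Now())
	if err != nil {
		panic(err)
	}
	if math.Abs(float64(d)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", d)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", d)
	}
}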
	I1108 00:13:42.285877   51228 start.go:83] releasing machines lock for "default-k8s-diff-port-039263", held for 22.820045752s
	I1108 00:13:42.285913   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:42.286161   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetIP
	I1108 00:13:42.288711   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.289095   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:42.289133   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.289241   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:42.289864   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:42.290109   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:42.290209   51228 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 00:13:42.290261   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:42.290323   51228 ssh_runner.go:195] Run: cat /version.json
	I1108 00:13:42.290345   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:42.293063   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.293219   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.293451   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:42.293483   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.293570   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:42.293599   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.293721   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:42.293878   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:42.293887   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:42.294075   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:42.294085   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:42.294234   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:42.294280   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:13:42.294336   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:13:42.386493   51228 ssh_runner.go:195] Run: systemctl --version
	I1108 00:13:42.411009   51228 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 00:13:42.558200   51228 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 00:13:42.566040   51228 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 00:13:42.566116   51228 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:13:42.584775   51228 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 00:13:42.584800   51228 start.go:472] detecting cgroup driver to use...
	I1108 00:13:42.584872   51228 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 00:13:42.598720   51228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 00:13:42.612836   51228 docker.go:203] disabling cri-docker service (if available) ...
	I1108 00:13:42.612927   51228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 00:13:42.627474   51228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 00:13:42.641670   51228 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 00:13:42.753616   51228 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 00:13:42.888608   51228 docker.go:219] disabling docker service ...
	I1108 00:13:42.888680   51228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 00:13:42.903298   51228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 00:13:42.920184   51228 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 00:13:43.054621   51228 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 00:13:43.181836   51228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 00:13:43.198481   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 00:13:43.219759   51228 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1108 00:13:43.219827   51228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:43.231137   51228 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 00:13:43.231221   51228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:43.242206   51228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:43.253506   51228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:43.264311   51228 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 00:13:43.276451   51228 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 00:13:43.288448   51228 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1108 00:13:43.288522   51228 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1108 00:13:43.305986   51228 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
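The crio.go lines above show the standard fallback for bridged pod networking: if the net.bridge.bridge-nf-call-iptables sysctl is missing (status 255 from sysctl), load the br_netfilter module, which creates /proc/sys/net/bridge, then enable IPv4 forwarding. A local (non-SSH) sketch of the same sequence using os/exec:

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback in the log: probe the sysctl,
// load br_netfilter if it is absent, then enable ip_forward.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// sysctl exits non-zero when /proc/sys/net/bridge is absent;
		// loading br_netfilter creates it.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	// Equivalent of: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}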
	I1108 00:13:43.318366   51228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 00:13:43.479739   51228 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 00:13:43.705223   51228 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 00:13:43.705302   51228 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 00:13:43.711842   51228 start.go:540] Will wait 60s for crictl version
	I1108 00:13:43.711915   51228 ssh_runner.go:195] Run: which crictl
	I1108 00:13:43.717688   51228 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 00:13:43.762492   51228 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1108 00:13:43.762651   51228 ssh_runner.go:195] Run: crio --version
	I1108 00:13:43.814548   51228 ssh_runner.go:195] Run: crio --version
	I1108 00:13:43.870144   51228 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
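After restarting CRI-O, the code above waits up to 60s for the runtime socket to appear before checking crictl. A sketch of that bounded wait; the 500ms poll interval is an assumption, only the 60s budget comes from the log.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a path until it exists or the deadline passes,
// like the "Will wait 60s for socket path" step above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond) // poll interval is an assumption
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}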
	I1108 00:13:39.990811   50505 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:13:40.020162   50505 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1108 00:13:40.064758   50505 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:13:40.081652   50505 system_pods.go:59] 8 kube-system pods found
	I1108 00:13:40.081705   50505 system_pods.go:61] "coredns-5dd5756b68-lhnz5" [936252ee-4f00-49e2-96e4-7c4f4a4ca378] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 00:13:40.081725   50505 system_pods.go:61] "etcd-no-preload-320390" [95e08672-dc80-4aa6-bd4a-e5f77bfc4b51] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 00:13:40.081738   50505 system_pods.go:61] "kube-apiserver-no-preload-320390" [3261561e-b7d5-4302-8e0b-301d00407e8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 00:13:40.081748   50505 system_pods.go:61] "kube-controller-manager-no-preload-320390" [b87602fd-b248-4529-9116-1851a4284bbf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 00:13:40.081763   50505 system_pods.go:61] "kube-proxy-c4mbm" [33806b69-57c0-4807-849b-b6a4f8a5db12] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 00:13:40.081777   50505 system_pods.go:61] "kube-scheduler-no-preload-320390" [4f7b4160-b99e-4f76-9b12-c5b1849c91b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 00:13:40.081791   50505 system_pods.go:61] "metrics-server-57f55c9bc5-th89c" [06aea7c0-065b-44a4-8d53-432f5722e937] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:13:40.081810   50505 system_pods.go:61] "storage-provisioner" [c7b0810b-1ba7-4d56-ad97-3f04d771960d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 00:13:40.081823   50505 system_pods.go:74] duration metric: took 17.024016ms to wait for pod list to return data ...
	I1108 00:13:40.081836   50505 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:13:40.093789   50505 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:13:40.093827   50505 node_conditions.go:123] node cpu capacity is 2
	I1108 00:13:40.093841   50505 node_conditions.go:105] duration metric: took 11.998569ms to run NodePressure ...
	I1108 00:13:40.093863   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:40.340962   50505 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1108 00:13:40.346004   50505 kubeadm.go:787] kubelet initialised
	I1108 00:13:40.346032   50505 kubeadm.go:788] duration metric: took 5.042344ms waiting for restarted kubelet to initialise ...
	I1108 00:13:40.346044   50505 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:13:40.355648   50505 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-lhnz5" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:42.377985   50505 pod_ready.go:102] pod "coredns-5dd5756b68-lhnz5" in "kube-system" namespace has status "Ready":"False"
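The pod_ready.go lines above poll each system pod until its Ready condition flips to True, within a 4m0s budget. A client-go sketch of the same check; the kubeconfig path is a placeholder, and the pod name is taken from the log for illustration only.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, which is
// what the "Ready":"True" lines above are waiting for.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s wait in the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-lhnz5", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}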
	I1108 00:13:42.313355   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Start
	I1108 00:13:42.313526   50022 main.go:141] libmachine: (old-k8s-version-590541) Ensuring networks are active...
	I1108 00:13:42.314176   50022 main.go:141] libmachine: (old-k8s-version-590541) Ensuring network default is active
	I1108 00:13:42.314638   50022 main.go:141] libmachine: (old-k8s-version-590541) Ensuring network mk-old-k8s-version-590541 is active
	I1108 00:13:42.315060   50022 main.go:141] libmachine: (old-k8s-version-590541) Getting domain xml...
	I1108 00:13:42.315833   50022 main.go:141] libmachine: (old-k8s-version-590541) Creating domain...
	I1108 00:13:43.739499   50022 main.go:141] libmachine: (old-k8s-version-590541) Waiting to get IP...
	I1108 00:13:43.740647   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:43.741195   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:43.741259   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:43.741155   51822 retry.go:31] will retry after 195.621332ms: waiting for machine to come up
	I1108 00:13:43.938557   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:43.939127   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:43.939268   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:43.939200   51822 retry.go:31] will retry after 278.651736ms: waiting for machine to come up
	I1108 00:13:44.219831   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:44.220473   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:44.220500   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:44.220418   51822 retry.go:31] will retry after 384.748872ms: waiting for machine to come up
	I1108 00:13:44.607110   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:44.607665   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:44.607696   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:44.607591   51822 retry.go:31] will retry after 401.60668ms: waiting for machine to come up
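The retry.go lines above wait for the VM's DHCP lease with randomized, growing delays. A generic sketch of that jittered-backoff pattern; the exact growth and jitter policy here is an assumption, only the shape (increasing randomized delays) comes from the log.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn with a jittered, growing delay until it
// succeeds or attempts run out, like the "will retry after ..." lines above.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// grow the delay each round and add up to 50% random jitter
		d := base * time.Duration(i+1)
		d += time.Duration(rand.Int63n(int64(d)/2 + 1))
		fmt.Printf("retry %d: will retry after %v: %v\n", i+1, d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	n := 0
	err := retryWithBackoff(10, 200*time.Millisecond, func() error {
		n++
		if n < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("done:", err)
}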
	I1108 00:13:43.871596   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetIP
	I1108 00:13:43.874814   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:43.875307   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:43.875357   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:43.875575   51228 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1108 00:13:43.880324   51228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:13:43.895271   51228 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1108 00:13:43.895331   51228 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:13:43.943120   51228 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1108 00:13:43.943238   51228 ssh_runner.go:195] Run: which lz4
	I1108 00:13:43.947723   51228 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1108 00:13:43.952328   51228 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1108 00:13:43.952365   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1108 00:13:45.857547   51228 crio.go:444] Took 1.909852 seconds to copy over tarball
	I1108 00:13:45.857623   51228 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
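The preload path above stats /preloaded.tar.lz4, copies the ~457 MB image tarball over when it is absent, and unpacks it with lz4 decompression into /var. A local sketch of the existence check plus extraction, using the same tar invocation as the log (lz4 must be installed on the host):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload mirrors the log: skip the copy if the tarball is already
// present, then extract it with lz4 decompression into /var.
func extractPreload(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload tarball not present, would scp it first: %w", err)
	}
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}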
	I1108 00:13:45.314087   50613 api_server.go:279] https://192.168.39.159:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:13:45.314125   50613 api_server.go:103] status: https://192.168.39.159:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:13:45.314144   50613 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1108 00:13:45.333352   50613 api_server.go:279] https://192.168.39.159:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:13:45.333384   50613 api_server.go:103] status: https://192.168.39.159:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:13:45.833959   50613 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1108 00:13:45.852530   50613 api_server.go:279] https://192.168.39.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:13:45.852613   50613 api_server.go:103] status: https://192.168.39.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:13:46.333996   50613 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1108 00:13:46.346680   50613 api_server.go:279] https://192.168.39.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:13:46.346714   50613 api_server.go:103] status: https://192.168.39.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:13:46.833955   50613 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1108 00:13:46.841287   50613 api_server.go:279] https://192.168.39.159:8443/healthz returned 200:
	ok
	I1108 00:13:46.853271   50613 api_server.go:141] control plane version: v1.28.3
	I1108 00:13:46.853299   50613 api_server.go:131] duration metric: took 6.372641273s to wait for apiserver health ...
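The api_server.go lines above show the apiserver's /healthz progressing from 403 (anonymous user, RBAC not bootstrapped) through 500 (post-start hooks still pending) to 200. A sketch of polling that endpoint until it is healthy; TLS verification is skipped because the probe runs before the client trusts the cluster CA, and every non-200 answer is simply treated as "not ready yet".

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz until it returns 200 "ok"
// or the deadline passes, mirroring the 403/500/200 progression above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // pre-bootstrap probe
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned 200: %s\n", body)
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy")
}

func main() {
	_ = waitForHealthz("https://192.168.39.159:8443/healthz", 4*time.Minute)
}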
	I1108 00:13:46.853310   50613 cni.go:84] Creating CNI manager for ""
	I1108 00:13:46.853318   50613 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:13:46.855336   50613 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:13:46.856955   50613 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:13:46.892049   50613 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1108 00:13:46.933039   50613 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:13:44.399678   50505 pod_ready.go:102] pod "coredns-5dd5756b68-lhnz5" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:45.879110   50505 pod_ready.go:92] pod "coredns-5dd5756b68-lhnz5" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:45.879142   50505 pod_ready.go:81] duration metric: took 5.523463579s waiting for pod "coredns-5dd5756b68-lhnz5" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:45.879154   50505 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:45.885356   50505 pod_ready.go:92] pod "etcd-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:45.885377   50505 pod_ready.go:81] duration metric: took 6.21581ms waiting for pod "etcd-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:45.885385   50505 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:47.914308   50505 pod_ready.go:102] pod "kube-apiserver-no-preload-320390" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:45.011074   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:45.011525   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:45.011560   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:45.011500   51822 retry.go:31] will retry after 708.154492ms: waiting for machine to come up
	I1108 00:13:45.720911   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:45.721383   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:45.721418   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:45.721294   51822 retry.go:31] will retry after 746.365542ms: waiting for machine to come up
	I1108 00:13:46.469031   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:46.469615   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:46.469641   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:46.469556   51822 retry.go:31] will retry after 924.305758ms: waiting for machine to come up
	I1108 00:13:47.395756   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:47.396297   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:47.396323   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:47.396241   51822 retry.go:31] will retry after 1.343866256s: waiting for machine to come up
	I1108 00:13:48.741427   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:48.741851   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:48.741883   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:48.741816   51822 retry.go:31] will retry after 1.388849147s: waiting for machine to come up
	I1108 00:13:49.625178   51228 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.76753046s)
	I1108 00:13:49.625229   51228 crio.go:451] Took 3.767633 seconds to extract the tarball
	I1108 00:13:49.625242   51228 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1108 00:13:49.670263   51228 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:13:49.727650   51228 crio.go:496] all images are preloaded for cri-o runtime.
	I1108 00:13:49.727677   51228 cache_images.go:84] Images are preloaded, skipping loading
	I1108 00:13:49.727747   51228 ssh_runner.go:195] Run: crio config
	I1108 00:13:49.811565   51228 cni.go:84] Creating CNI manager for ""
	I1108 00:13:49.811592   51228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:13:49.811615   51228 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1108 00:13:49.811639   51228 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.116 APIServerPort:8444 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-039263 NodeName:default-k8s-diff-port-039263 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 00:13:49.811812   51228 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.116
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-039263"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
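The kubeadm config above is generated from the cluster profile (advertise address, bind port 8444, node name, CRI socket). A minimal text/template sketch of filling such a manifest; the template here is a trimmed illustration, not minikube's actual bootstrapper template.

package main

import (
	"os"
	"text/template"
)

// A trimmed stand-in for the InitConfiguration stanza above; minikube's
// real template lives in its bootstrapper package.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	_ = t.Execute(os.Stdout, map[string]any{
		"NodeIP":    "192.168.72.116",
		"Port":      8444,
		"NodeName":  "default-k8s-diff-port-039263",
		"CRISocket": "/var/run/crio/crio.sock",
	})
}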
	I1108 00:13:49.811906   51228 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-039263 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-039263 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1108 00:13:49.811984   51228 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1108 00:13:49.822961   51228 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 00:13:49.823027   51228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 00:13:49.832632   51228 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1108 00:13:49.850812   51228 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 00:13:49.869345   51228 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1108 00:13:49.887645   51228 ssh_runner.go:195] Run: grep 192.168.72.116	control-plane.minikube.internal$ /etc/hosts
	I1108 00:13:49.892538   51228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:13:49.907166   51228 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263 for IP: 192.168.72.116
	I1108 00:13:49.907205   51228 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:13:49.907374   51228 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1108 00:13:49.907425   51228 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1108 00:13:49.907523   51228 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/client.key
	I1108 00:13:49.907601   51228 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/apiserver.key.b2cbdf93
	I1108 00:13:49.907658   51228 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/proxy-client.key
	I1108 00:13:49.907807   51228 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem (1338 bytes)
	W1108 00:13:49.907851   51228 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848_empty.pem, impossibly tiny 0 bytes
	I1108 00:13:49.907872   51228 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 00:13:49.907915   51228 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1108 00:13:49.907951   51228 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1108 00:13:49.907988   51228 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1108 00:13:49.908046   51228 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:13:49.908955   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1108 00:13:49.938941   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 00:13:49.964654   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 00:13:49.991354   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 00:13:50.018895   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 00:13:50.048330   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 00:13:50.076095   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 00:13:50.103752   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 00:13:50.130140   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem --> /usr/share/ca-certificates/16848.pem (1338 bytes)
	I1108 00:13:50.156862   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /usr/share/ca-certificates/168482.pem (1708 bytes)
	I1108 00:13:50.181994   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 00:13:50.208069   51228 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 00:13:50.226069   51228 ssh_runner.go:195] Run: openssl version
	I1108 00:13:50.232941   51228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168482.pem && ln -fs /usr/share/ca-certificates/168482.pem /etc/ssl/certs/168482.pem"
	I1108 00:13:50.246981   51228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168482.pem
	I1108 00:13:50.252981   51228 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1108 00:13:50.253059   51228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168482.pem
	I1108 00:13:50.260626   51228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168482.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 00:13:50.274135   51228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 00:13:50.285611   51228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:50.290761   51228 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:50.290837   51228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:50.297508   51228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 00:13:50.308772   51228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16848.pem && ln -fs /usr/share/ca-certificates/16848.pem /etc/ssl/certs/16848.pem"
	I1108 00:13:50.320122   51228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16848.pem
	I1108 00:13:50.326021   51228 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1108 00:13:50.326083   51228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16848.pem
	I1108 00:13:50.332534   51228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16848.pem /etc/ssl/certs/51391683.0"
	I1108 00:13:50.344381   51228 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1108 00:13:50.350040   51228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 00:13:50.356282   51228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 00:13:50.362850   51228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 00:13:50.378237   51228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 00:13:50.385607   51228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 00:13:50.392272   51228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
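The certificate setup above does two things with openssl: it links each CA into /usr/share/ca-certificates and /etc/ssl/certs under its subject hash (the 3ec20f2e.0-style names OpenSSL uses for CA lookup), and it verifies each cluster certificate is still valid for at least 86400 seconds. A sketch shelling out to the same openssl invocations shown in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash symlinks a CA cert to /etc/ssl/certs/<hash>.0,
// matching the ln -fs steps above.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(certPath, link)
}

// validFor24h runs `openssl x509 -checkend 86400`; a zero exit status
// means the cert will not expire within the next day.
func validFor24h(certPath string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run() == nil
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("link failed:", err)
	}
	fmt.Println("apiserver-etcd-client still valid:",
		validFor24h("/var/lib/minikube/certs/apiserver-etcd-client.crt"))
}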
	I1108 00:13:50.399220   51228 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-039263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-039263 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.116 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:13:50.399304   51228 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 00:13:50.399358   51228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:13:50.449693   51228 cri.go:89] found id: ""
	I1108 00:13:50.449770   51228 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 00:13:50.460225   51228 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1108 00:13:50.460256   51228 kubeadm.go:636] restartCluster start
	I1108 00:13:50.460313   51228 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 00:13:50.469777   51228 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:50.470973   51228 kubeconfig.go:92] found "default-k8s-diff-port-039263" server: "https://192.168.72.116:8444"
	I1108 00:13:50.473778   51228 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 00:13:50.482964   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:50.483022   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:50.495100   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:50.495123   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:50.495186   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:50.508735   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:46.949012   50613 system_pods.go:59] 9 kube-system pods found
	I1108 00:13:46.950252   50613 system_pods.go:61] "coredns-5dd5756b68-7djdr" [a1459bf3-703b-418a-bc22-c98e285c6e31] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 00:13:46.950302   50613 system_pods.go:61] "coredns-5dd5756b68-8qjbd" [fa7b05fd-725b-4c9c-815e-360f2bef8ee6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 00:13:46.950336   50613 system_pods.go:61] "etcd-embed-certs-253253" [2631ed7d-3af4-4848-bbb8-c77038f8a1f4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 00:13:46.950369   50613 system_pods.go:61] "kube-apiserver-embed-certs-253253" [80b3e8da-6474-4fd8-bb86-0d9cc70086ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 00:13:46.950391   50613 system_pods.go:61] "kube-controller-manager-embed-certs-253253" [ee19def3-043a-4832-8153-52aaf8b4748a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 00:13:46.950407   50613 system_pods.go:61] "kube-proxy-rsgkf" [509d66e3-b034-4dcd-a16e-b2f93b9efa6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 00:13:46.950482   50613 system_pods.go:61] "kube-scheduler-embed-certs-253253" [ef7bb9c3-98c8-45d8-8f54-852fb639b408] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 00:13:46.950497   50613 system_pods.go:61] "metrics-server-57f55c9bc5-s7ldx" [61cd423c-edbd-4d0c-87e8-1ac8e52c70e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:13:46.950507   50613 system_pods.go:61] "storage-provisioner" [d6157b7c-6b52-4ca8-a935-d68a0291305f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 00:13:46.950519   50613 system_pods.go:74] duration metric: took 17.457991ms to wait for pod list to return data ...
	I1108 00:13:46.950532   50613 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:13:46.956062   50613 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:13:46.956142   50613 node_conditions.go:123] node cpu capacity is 2
	I1108 00:13:46.956165   50613 node_conditions.go:105] duration metric: took 5.622732ms to run NodePressure ...
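
The system_pods and node_conditions lines above are the post-restart verification pass: list every kube-system pod, then read each node's CPU and ephemeral-storage capacity. A rough client-go equivalent, assuming an already-built *kubernetes.Clientset (the API calls are standard client-go, not minikube's actual code):

package sketch

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// verifyCluster lists kube-system pods and prints node capacity figures,
// mirroring the system_pods/node_conditions checks in the log.
func verifyCluster(ctx context.Context, cs *kubernetes.Clientset) error {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[v1.ResourceCPU]
		eph := n.Status.Capacity[v1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu capacity %s, storage ephemeral capacity %s\n",
			n.Name, cpu.String(), eph.String())
	}
	return nil
}
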
	I1108 00:13:46.956193   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:47.272695   50613 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1108 00:13:47.280001   50613 kubeadm.go:787] kubelet initialised
	I1108 00:13:47.280031   50613 kubeadm.go:788] duration metric: took 7.30064ms waiting for restarted kubelet to initialise ...
	I1108 00:13:47.280041   50613 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:13:47.290043   50613 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-7djdr" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:50.378703   50613 pod_ready.go:102] pod "coredns-5dd5756b68-7djdr" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:50.370740   50505 pod_ready.go:102] pod "kube-apiserver-no-preload-320390" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:51.912802   50505 pod_ready.go:92] pod "kube-apiserver-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:51.912845   50505 pod_ready.go:81] duration metric: took 6.027451924s waiting for pod "kube-apiserver-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.912861   50505 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.920043   50505 pod_ready.go:92] pod "kube-controller-manager-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:51.920073   50505 pod_ready.go:81] duration metric: took 7.195906ms waiting for pod "kube-controller-manager-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.920085   50505 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c4mbm" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.927863   50505 pod_ready.go:92] pod "kube-proxy-c4mbm" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:51.927887   50505 pod_ready.go:81] duration metric: took 7.793258ms waiting for pod "kube-proxy-c4mbm" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.927900   50505 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.934444   50505 pod_ready.go:92] pod "kube-scheduler-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:51.934470   50505 pod_ready.go:81] duration metric: took 6.560509ms waiting for pod "kube-scheduler-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.934481   50505 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace to be "Ready" ...
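
Each "pod_ready.go:78] waiting up to 4m0s ..." line is a per-pod poll on the PodReady condition. A condensed sketch of that wait using client-go's classic polling helper (minikube's real implementation lives in pod_ready.go; this is only an illustration of the pattern):

package sketch

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady blocks until the named pod reports Ready=True or the timeout expires.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat lookup errors as transient and keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == v1.PodReady {
				return c.Status == v1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}
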
	I1108 00:13:50.131947   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:50.132491   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:50.132526   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:50.132397   51822 retry.go:31] will retry after 1.410573405s: waiting for machine to come up
	I1108 00:13:51.544674   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:51.545073   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:51.545099   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:51.545025   51822 retry.go:31] will retry after 1.773802671s: waiting for machine to come up
	I1108 00:13:53.320381   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:53.320863   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:53.320893   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:53.320805   51822 retry.go:31] will retry after 3.166868207s: waiting for machine to come up
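
The libmachine lines above show the KVM driver polling libvirt for a DHCP lease, sleeping a little longer after each miss (1.41s, 1.77s, 3.17s ...). The shape is a jittered, growing backoff; a standalone sketch, with lookupIP standing in for the real lease query against the libvirt network:

package sketch

import (
	"fmt"
	"math/rand"
	"time"
)

// waitForIP retries lookupIP with a growing, jittered delay until an
// address appears or attempts run out. lookupIP is a hypothetical
// stand-in for the driver's DHCP-lease lookup.
func waitForIP(lookupIP func() (string, error), attempts int) (string, error) {
	delay := time.Second
	for i := 0; i < attempts; i++ {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay *= 2
	}
	return "", fmt.Errorf("machine never reported an IP after %d attempts", attempts)
}
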
	I1108 00:13:51.009734   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:51.009825   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:51.026052   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:51.509697   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:51.509786   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:51.527840   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:52.009557   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:52.009656   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:52.025049   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:52.509606   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:52.509707   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:52.526174   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:53.008803   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:53.008954   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:53.022472   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:53.508900   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:53.509005   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:53.525225   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:54.009884   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:54.009974   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:54.022171   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:54.509280   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:54.509376   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:54.522041   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:55.009670   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:55.009752   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:55.023035   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:55.509640   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:55.509717   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:55.526730   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:52.836317   50613 pod_ready.go:102] pod "coredns-5dd5756b68-7djdr" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:53.332094   50613 pod_ready.go:92] pod "coredns-5dd5756b68-7djdr" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:53.332121   50613 pod_ready.go:81] duration metric: took 6.042047013s waiting for pod "coredns-5dd5756b68-7djdr" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:53.332133   50613 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-8qjbd" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:53.337858   50613 pod_ready.go:92] pod "coredns-5dd5756b68-8qjbd" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:53.337882   50613 pod_ready.go:81] duration metric: took 5.740229ms waiting for pod "coredns-5dd5756b68-8qjbd" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:53.337894   50613 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:55.356131   50613 pod_ready.go:102] pod "etcd-embed-certs-253253" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:54.323357   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:56.328874   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:58.820773   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:56.490058   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:56.490553   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:56.490590   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:56.490511   51822 retry.go:31] will retry after 3.18441493s: waiting for machine to come up
	I1108 00:13:56.009549   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:56.009646   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:56.024559   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:56.508912   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:56.509015   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:56.521861   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:57.009408   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:57.009479   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:57.022156   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:57.509466   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:57.509554   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:57.522766   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:58.008909   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:58.009026   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:58.021521   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:58.509050   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:58.509134   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:58.521387   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:59.008889   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:59.008975   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:59.021781   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:59.509489   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:59.509575   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:59.521581   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:00.009117   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:14:00.009196   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:00.022210   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:00.483934   51228 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1108 00:14:00.483990   51228 kubeadm.go:1128] stopping kube-system containers ...
	I1108 00:14:00.484004   51228 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1108 00:14:00.484066   51228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:14:00.528120   51228 cri.go:89] found id: ""
	I1108 00:14:00.528178   51228 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1108 00:14:00.544876   51228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:14:00.553827   51228 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:14:00.553883   51228 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:14:00.562695   51228 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1108 00:14:00.562721   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:00.676044   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
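
"needs reconfigure" sends the run down the soft-restart path: stop the kubelet, then replay individual `kubeadm init phase` subcommands against the saved kubeadm.yaml instead of doing a full `kubeadm init` (the remaining phases appear further down in the log). A sketch of that sequence with os/exec, wrapping each phase in `bash -c` so the PATH override expands as it does in the log:

package sketch

import (
	"fmt"
	"os/exec"
)

// replayInitPhases re-runs the kubeadm phases used when reconfiguring an
// existing cluster in place; error handling is reduced to the first failure.
func replayInitPhases(binDir, config string) error {
	phases := []string{
		"init phase certs all",
		"init phase kubeconfig all",
		"init phase kubelet-start",
		"init phase control-plane all",
		"init phase etcd local",
	}
	for _, p := range phases {
		sh := fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm %s --config %s`, binDir, p, config)
		if out, err := exec.Command("/bin/bash", "-c", sh).CombinedOutput(); err != nil {
			return fmt.Errorf("%s: %v\n%s", sh, err, out)
		}
	}
	return nil
}
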
	I1108 00:13:57.856242   50613 pod_ready.go:102] pod "etcd-embed-certs-253253" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:58.855444   50613 pod_ready.go:92] pod "etcd-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:58.855471   50613 pod_ready.go:81] duration metric: took 5.517568786s waiting for pod "etcd-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.855479   50613 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.860431   50613 pod_ready.go:92] pod "kube-apiserver-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:58.860453   50613 pod_ready.go:81] duration metric: took 4.966273ms waiting for pod "kube-apiserver-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.860464   50613 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.865854   50613 pod_ready.go:92] pod "kube-controller-manager-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:58.865874   50613 pod_ready.go:81] duration metric: took 5.40177ms waiting for pod "kube-controller-manager-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.865914   50613 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rsgkf" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.870805   50613 pod_ready.go:92] pod "kube-proxy-rsgkf" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:58.870826   50613 pod_ready.go:81] duration metric: took 4.898411ms waiting for pod "kube-proxy-rsgkf" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.870835   50613 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.958009   50613 pod_ready.go:92] pod "kube-scheduler-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:58.958034   50613 pod_ready.go:81] duration metric: took 87.190501ms waiting for pod "kube-scheduler-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.958052   50613 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:01.265674   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:00.823696   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:03.322129   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:59.678086   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:59.678579   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:59.678598   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:59.678528   51822 retry.go:31] will retry after 4.30352873s: waiting for machine to come up
	I1108 00:14:03.983994   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:03.984437   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has current primary IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:03.984474   50022 main.go:141] libmachine: (old-k8s-version-590541) Found IP for machine: 192.168.50.49
	I1108 00:14:03.984489   50022 main.go:141] libmachine: (old-k8s-version-590541) Reserving static IP address...
	I1108 00:14:03.984947   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "old-k8s-version-590541", mac: "52:54:00:3c:aa:82", ip: "192.168.50.49"} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:03.984981   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | skip adding static IP to network mk-old-k8s-version-590541 - found existing host DHCP lease matching {name: "old-k8s-version-590541", mac: "52:54:00:3c:aa:82", ip: "192.168.50.49"}
	I1108 00:14:03.985000   50022 main.go:141] libmachine: (old-k8s-version-590541) Reserved static IP address: 192.168.50.49
	I1108 00:14:03.985020   50022 main.go:141] libmachine: (old-k8s-version-590541) Waiting for SSH to be available...
	I1108 00:14:03.985034   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Getting to WaitForSSH function...
	I1108 00:14:03.987671   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:03.988083   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:03.988116   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:03.988388   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Using SSH client type: external
	I1108 00:14:03.988424   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa (-rw-------)
	I1108 00:14:03.988461   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.49 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1108 00:14:03.988481   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | About to run SSH command:
	I1108 00:14:03.988496   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | exit 0
	I1108 00:14:04.080867   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | SSH cmd err, output: <nil>: 
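
WaitForSSH above shells out to the external ssh binary with a fixed set of hardening flags and runs `exit 0` until it succeeds. Reconstructed as a small helper (the flag list is copied from the log line; `-o ConnectTimeout=10 -o ConnectionAttempts=3` encode the 10s timeout and 3 attempts):

package sketch

import "os/exec"

// sshExitZero mirrors the driver's reachability probe: run `exit 0`
// on the guest via the external ssh client and report whether it worked.
func sshExitZero(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no",
		"-o", "ControlPath=none",
		"-o", "LogLevel=quiet",
		"-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run()
}
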
	I1108 00:14:04.081275   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetConfigRaw
	I1108 00:14:04.081955   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetIP
	I1108 00:14:04.085061   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.085512   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.085554   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.085942   50022 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/config.json ...
	I1108 00:14:04.086165   50022 machine.go:88] provisioning docker machine ...
	I1108 00:14:04.086188   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:14:04.086417   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetMachineName
	I1108 00:14:04.086612   50022 buildroot.go:166] provisioning hostname "old-k8s-version-590541"
	I1108 00:14:04.086634   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetMachineName
	I1108 00:14:04.086822   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:04.089431   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.089808   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.089838   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.090007   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:04.090201   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.090362   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.090535   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:04.090686   50022 main.go:141] libmachine: Using SSH client type: native
	I1108 00:14:04.090991   50022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1108 00:14:04.091002   50022 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-590541 && echo "old-k8s-version-590541" | sudo tee /etc/hostname
	I1108 00:14:04.228526   50022 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-590541
	
	I1108 00:14:04.228561   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:04.232020   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.232390   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.232454   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.232743   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:04.232930   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.233109   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.233264   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:04.233430   50022 main.go:141] libmachine: Using SSH client type: native
	I1108 00:14:04.233786   50022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1108 00:14:04.233812   50022 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-590541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-590541/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-590541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 00:14:04.370396   50022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
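
The shell fragment above keeps /etc/hosts consistent with the new hostname: if no entry already ends in the hostname, either rewrite the existing 127.0.1.1 line or append one. The same logic expressed in Go over the file contents (a local sketch only; the real provisioner applies it remotely via grep/sed/tee exactly as shown):

package sketch

import "strings"

// patchHosts returns hosts with a 127.0.1.1 entry for hostname,
// mirroring the grep/sed/tee logic in the provisioning script.
func patchHosts(hosts, hostname string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) > 0 && f[len(f)-1] == hostname {
			return hosts // an entry for this hostname already exists
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname // rewrite the existing entry
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + hostname // no entry at all: append one
}
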
	I1108 00:14:04.370424   50022 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1108 00:14:04.370469   50022 buildroot.go:174] setting up certificates
	I1108 00:14:04.370487   50022 provision.go:83] configureAuth start
	I1108 00:14:04.370505   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetMachineName
	I1108 00:14:04.370779   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetIP
	I1108 00:14:04.373683   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.374081   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.374111   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.374240   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:04.377048   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.377441   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.377469   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.377596   50022 provision.go:138] copyHostCerts
	I1108 00:14:04.377658   50022 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1108 00:14:04.377678   50022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1108 00:14:04.377748   50022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1108 00:14:04.377855   50022 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1108 00:14:04.377867   50022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1108 00:14:04.377893   50022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1108 00:14:04.377965   50022 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1108 00:14:04.377979   50022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1108 00:14:04.378005   50022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1108 00:14:04.378064   50022 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-590541 san=[192.168.50.49 192.168.50.49 localhost 127.0.0.1 minikube old-k8s-version-590541]
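
"generating server cert ... san=[...]" is a plain CA-signed x509 server certificate whose SAN list carries the machine's IPs and names. A compact crypto/x509 sketch under those assumptions (caCert/caKey are the already-loaded CA pair from certs/ca.pem and ca-key.pem; this is not minikube's actual helper):

package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// newServerCert issues a CA-signed server certificate whose SANs cover
// the given DNS names and IPs, returning PEM-encoded certificate bytes.
func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
	org string, dnsNames []string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{org}}, // e.g. jenkins.old-k8s-version-590541
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsNames, // e.g. localhost, minikube, old-k8s-version-590541
		IPAddresses:  ips,      // e.g. 192.168.50.49, 127.0.0.1
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
}
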
	I1108 00:14:04.534682   50022 provision.go:172] copyRemoteCerts
	I1108 00:14:04.534750   50022 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 00:14:04.534778   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:04.538002   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.538379   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.538408   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.538639   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:04.538789   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.538975   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:04.539146   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:14:04.632308   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 00:14:01.961492   51228 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.285410864s)
	I1108 00:14:01.961529   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:02.165604   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:02.235655   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:02.352126   51228 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:14:02.352212   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:02.370538   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:02.884696   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:03.384139   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:03.884529   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:04.384134   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:04.884877   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:04.913244   51228 api_server.go:72] duration metric: took 2.56112461s to wait for apiserver process to appear ...
	I1108 00:14:04.913273   51228 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:14:04.913295   51228 api_server.go:253] Checking apiserver healthz at https://192.168.72.116:8444/healthz ...
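
The healthz probe that starts here has to tolerate 403 and 500 responses while the apiserver's RBAC bootstrap hooks finish (visible later in the [-]poststarthook lines). A sketch of that loop: an HTTPS GET with certificate verification disabled, counted as healthy only on 200:

package sketch

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200.
// TLS verification is skipped because the probe may run before the local
// CA material is in place.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403 before RBAC bootstrap, 500 while poststarthooks run: retry.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s never returned 200 within %v", url, timeout)
}
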
	I1108 00:14:04.657542   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1108 00:14:04.682815   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 00:14:04.709405   50022 provision.go:86] duration metric: configureAuth took 338.902281ms
	I1108 00:14:04.709439   50022 buildroot.go:189] setting minikube options for container-runtime
	I1108 00:14:04.709651   50022 config.go:182] Loaded profile config "old-k8s-version-590541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1108 00:14:04.709741   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:04.713141   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.713520   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.713561   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.713718   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:04.713923   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.714108   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.714259   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:04.714497   50022 main.go:141] libmachine: Using SSH client type: native
	I1108 00:14:04.714885   50022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1108 00:14:04.714905   50022 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 00:14:05.055346   50022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 00:14:05.055427   50022 machine.go:91] provisioned docker machine in 969.247821ms
	I1108 00:14:05.055446   50022 start.go:300] post-start starting for "old-k8s-version-590541" (driver="kvm2")
	I1108 00:14:05.055459   50022 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 00:14:05.055493   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:14:05.055841   50022 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 00:14:05.055895   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:05.058959   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.059423   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:05.059457   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.059601   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:05.059775   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:05.059895   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:05.060042   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:14:05.151543   50022 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 00:14:05.155876   50022 info.go:137] Remote host: Buildroot 2021.02.12
	I1108 00:14:05.155902   50022 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1108 00:14:05.155969   50022 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1108 00:14:05.156056   50022 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1108 00:14:05.156229   50022 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 00:14:05.165742   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:14:05.190622   50022 start.go:303] post-start completed in 135.159333ms
	I1108 00:14:05.190648   50022 fix.go:56] fixHost completed within 22.904612851s
	I1108 00:14:05.190673   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:05.193716   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.194165   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:05.194195   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.194480   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:05.194725   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:05.194929   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:05.195106   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:05.195260   50022 main.go:141] libmachine: Using SSH client type: native
	I1108 00:14:05.195755   50022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1108 00:14:05.195778   50022 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1108 00:14:05.326443   50022 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699402445.269657345
	
	I1108 00:14:05.326467   50022 fix.go:206] guest clock: 1699402445.269657345
	I1108 00:14:05.326476   50022 fix.go:219] Guest: 2023-11-08 00:14:05.269657345 +0000 UTC Remote: 2023-11-08 00:14:05.190652611 +0000 UTC m=+370.589908297 (delta=79.004734ms)
	I1108 00:14:05.326524   50022 fix.go:190] guest clock delta is within tolerance: 79.004734ms
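
The guest-clock check reads `date +%s.%N` on the VM, converts the result, and compares it against the host's wall clock with a fixed tolerance (the log only shows that the 79ms delta passed; the tolerance value here is an assumption for illustration):

package sketch

import (
	"fmt"
	"strconv"
	"time"
)

// clockDeltaOK parses the guest's `date +%s.%N` output and checks that
// guest and host clocks agree to within tolerance. Nanosecond precision
// is approximate after the float round-trip, which is fine for this check.
func clockDeltaOK(dateOutput string, host time.Time, tolerance time.Duration) (time.Duration, error) {
	secs, err := strconv.ParseFloat(dateOutput, 64)
	if err != nil {
		return 0, fmt.Errorf("parsing guest clock %q: %v", dateOutput, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	if delta > tolerance {
		return delta, fmt.Errorf("guest clock delta %v exceeds tolerance %v", delta, tolerance)
	}
	return delta, nil
}
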
	I1108 00:14:05.326531   50022 start.go:83] releasing machines lock for "old-k8s-version-590541", held for 23.040527062s
	I1108 00:14:05.326558   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:14:05.326845   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetIP
	I1108 00:14:05.329775   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.330225   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:05.330254   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.330447   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:14:05.331102   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:14:05.331338   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:14:05.331424   50022 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 00:14:05.331467   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:05.331584   50022 ssh_runner.go:195] Run: cat /version.json
	I1108 00:14:05.331610   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:05.334586   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.334817   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.335125   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:05.335182   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.335225   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:05.335307   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:05.335339   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.335418   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:05.335536   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:05.335603   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:05.335774   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:05.335783   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:14:05.335906   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:05.336063   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:14:05.423679   50022 ssh_runner.go:195] Run: systemctl --version
	I1108 00:14:05.446956   50022 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 00:14:05.598713   50022 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 00:14:05.605558   50022 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 00:14:05.605641   50022 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:14:05.620183   50022 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 00:14:05.620211   50022 start.go:472] detecting cgroup driver to use...
	I1108 00:14:05.620277   50022 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 00:14:05.635981   50022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 00:14:05.649637   50022 docker.go:203] disabling cri-docker service (if available) ...
	I1108 00:14:05.649699   50022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 00:14:05.664232   50022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 00:14:05.678205   50022 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 00:14:05.791991   50022 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 00:14:05.925002   50022 docker.go:219] disabling docker service ...
	I1108 00:14:05.925135   50022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 00:14:05.939853   50022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 00:14:05.955518   50022 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 00:14:06.074872   50022 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 00:14:06.189371   50022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 00:14:06.202247   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 00:14:06.219012   50022 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1108 00:14:06.219082   50022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:14:06.229837   50022 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 00:14:06.229911   50022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:14:06.239769   50022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:14:06.248633   50022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:14:06.257717   50022 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
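
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the requested cgroup manager, and reset conmon_cgroup to "pod". Collected into one helper (a sketch; each command string matches the log lines):

package sketch

import (
	"fmt"
	"os/exec"
)

// configureCRIO applies the in-place edits to the CRI-O drop-in config
// shown above, stopping at the first command that fails.
func configureCRIO(pauseImage, cgroupMgr string) error {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	cmds := []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupMgr, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
	}
	for _, c := range cmds {
		if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
			return fmt.Errorf("%s: %v\n%s", c, err, out)
		}
	}
	return nil
}
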
	I1108 00:14:06.268893   50022 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 00:14:06.277427   50022 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1108 00:14:06.277495   50022 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1108 00:14:06.290771   50022 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
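
The netfilter fallback above is deliberate: sysctl exits 255 because /proc/sys/net/bridge does not exist until br_netfilter is loaded, so the failed check triggers a modprobe, after which IPv4 forwarding is enabled. The same decision chain in Go (a local sketch; the real runner issues these over SSH):

package sketch

import "os/exec"

// ensureNetfilter checks the bridge-nf sysctl, loads br_netfilter when the
// key is missing, then enables IPv4 forwarding.
func ensureNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// sysctl fails while the key is absent; loading the module creates it.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return err
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}
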
	I1108 00:14:06.299918   50022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 00:14:06.421038   50022 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 00:14:06.587544   50022 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 00:14:06.587624   50022 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 00:14:06.592726   50022 start.go:540] Will wait 60s for crictl version
	I1108 00:14:06.592781   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:06.596695   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 00:14:06.637642   50022 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1108 00:14:06.637733   50022 ssh_runner.go:195] Run: crio --version
	I1108 00:14:06.690026   50022 ssh_runner.go:195] Run: crio --version
	I1108 00:14:06.740455   50022 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
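
Both "Will wait 60s" lines above are simple existence polls after `systemctl restart crio`: first for the CRI socket path, then for a working crictl. A local sketch of the socket wait (the real check stats the path on the guest over SSH rather than locally):

package sketch

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for the CRI socket to appear after the runtime restart.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("%s did not appear within %v", path, timeout)
}
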
	I1108 00:14:03.266720   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:05.764837   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:05.322160   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:07.329491   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:06.741799   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetIP
	I1108 00:14:06.744301   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:06.744599   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:06.744630   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:06.744861   50022 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1108 00:14:06.749385   50022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:14:06.762645   50022 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1108 00:14:06.762732   50022 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:14:06.804386   50022 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1108 00:14:06.804458   50022 ssh_runner.go:195] Run: which lz4
	I1108 00:14:06.808948   50022 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1108 00:14:06.813319   50022 ssh_runner.go:362] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1108 00:14:06.813355   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1108 00:14:08.476578   50022 crio.go:444] Took 1.667668 seconds to copy over tarball
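The stat probe decides whether the 441 MB preload tarball needs transferring: a non-zero exit means the file is absent, so it is scp'd over before extraction. A minimal sketch of that check-then-copy fallback, with stubbed helpers standing in for the real SSH runner and scp:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // present reports whether stat succeeds; exit status 1 (as in the log's
    // failed "existence check") simply means "copy it".
    func present(path string) bool {
    	return exec.Command("sh", "-c", fmt.Sprintf(`stat -c "%%s %%y" %s`, path)).Run() == nil
    }

    // copyTarball stands in for the scp step; a real implementation would
    // stream the local cache file to the VM.
    func copyTarball(local, remote string) error {
    	return exec.Command("cp", local, remote).Run()
    }

    func ensurePreload(local, remote string) error {
    	if present(remote) {
    		return nil // already on the machine, skip the 441 MB transfer
    	}
    	return copyTarball(local, remote)
    }

    func main() {
    	if err := ensurePreload("preloaded-images.tar.lz4", "/preloaded.tar.lz4"); err != nil {
    		fmt.Println(err)
    	}
    }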
	I1108 00:14:08.476646   50022 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1108 00:14:09.078810   51228 api_server.go:279] https://192.168.72.116:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:14:09.078843   51228 api_server.go:103] status: https://192.168.72.116:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:14:09.078859   51228 api_server.go:253] Checking apiserver healthz at https://192.168.72.116:8444/healthz ...
	I1108 00:14:09.140049   51228 api_server.go:279] https://192.168.72.116:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:14:09.140083   51228 api_server.go:103] status: https://192.168.72.116:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:14:09.641000   51228 api_server.go:253] Checking apiserver healthz at https://192.168.72.116:8444/healthz ...
	I1108 00:14:09.647216   51228 api_server.go:279] https://192.168.72.116:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:14:09.647247   51228 api_server.go:103] status: https://192.168.72.116:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:14:10.140446   51228 api_server.go:253] Checking apiserver healthz at https://192.168.72.116:8444/healthz ...
	I1108 00:14:10.148995   51228 api_server.go:279] https://192.168.72.116:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:14:10.149028   51228 api_server.go:103] status: https://192.168.72.116:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:14:10.640719   51228 api_server.go:253] Checking apiserver healthz at https://192.168.72.116:8444/healthz ...
	I1108 00:14:10.649076   51228 api_server.go:279] https://192.168.72.116:8444/healthz returned 200:
	ok
	I1108 00:14:10.660508   51228 api_server.go:141] control plane version: v1.28.3
	I1108 00:14:10.660545   51228 api_server.go:131] duration metric: took 5.747263547s to wait for apiserver health ...
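Each 403 and 500 above is one iteration of a poll against /healthz: the 403s appear while the anonymous probe predates the RBAC bootstrap roles, the 500s while individual [-]poststarthook entries are still failing, and the loop exits on the first plain 200 "ok". A sketch of that wait, assuming an anonymous client that skips TLS verification (acceptable only against a throwaway test VM):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if resp, err := client.Get(url); err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			// 403 (anonymous user, RBAC not bootstrapped yet) and 500
    			// (failing poststarthooks) both mean "retry"; only 200 passes.
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy within %s", timeout)
    }

    func main() {
    	fmt.Println(waitHealthz("https://192.168.72.116:8444/healthz", time.Minute))
    }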
	I1108 00:14:10.660556   51228 cni.go:84] Creating CNI manager for ""
	I1108 00:14:10.660566   51228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:14:10.662644   51228 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:14:10.664069   51228 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:14:10.682131   51228 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
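The 457-byte conflist scp'd into /etc/cni/net.d is the bridge CNI configuration announced two lines earlier. The log never prints its contents, so the following is only a plausible minimal shape for such a file (bridge plus portmap plugins, host-local IPAM on the 10.244.0.0/16 pod CIDR configured later), embedded as a Go constant:

    package main

    import "os"

    // bridgeConflist approximates the kind of file written above; treat the
    // exact fields as assumptions, since the log records only its size.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	// Mirrors the mkdir -p + scp pair in the log.
    	_ = os.MkdirAll("/etc/cni/net.d", 0o755)
    	_ = os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644)
    }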
	I1108 00:14:10.709582   51228 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:14:10.725779   51228 system_pods.go:59] 8 kube-system pods found
	I1108 00:14:10.725840   51228 system_pods.go:61] "coredns-5dd5756b68-rz9t4" [d7b24f41-ed9e-4b07-991b-8587f49d7902] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 00:14:10.725854   51228 system_pods.go:61] "etcd-default-k8s-diff-port-039263" [f58b5fbb-a565-4d47-8b3d-ea62169dc0fc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 00:14:10.725868   51228 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-039263" [d0c3391c-679f-49ad-a6ff-ef62d74a62ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 00:14:10.725882   51228 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-039263" [33f54c9b-cc67-4662-8db9-c735fde4d9a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 00:14:10.725903   51228 system_pods.go:61] "kube-proxy-z7b8g" [079a28b1-dbad-4e62-a9ea-b667206433cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 00:14:10.725914   51228 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-039263" [629f940b-6d2a-4c3c-8a11-2805dc2c04d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 00:14:10.725927   51228 system_pods.go:61] "metrics-server-57f55c9bc5-nlhpn" [f5d69cb1-4266-45fc-9bab-57053f915aa0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:14:10.725941   51228 system_pods.go:61] "storage-provisioner" [fb6541da-3ed3-4abb-b534-643bb5faf7d3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 00:14:10.725953   51228 system_pods.go:74] duration metric: took 16.346941ms to wait for pod list to return data ...
	I1108 00:14:10.725965   51228 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:14:10.730466   51228 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:14:10.730555   51228 node_conditions.go:123] node cpu capacity is 2
	I1108 00:14:10.730574   51228 node_conditions.go:105] duration metric: took 4.602969ms to run NodePressure ...
	I1108 00:14:10.730595   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:07.772448   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:10.267241   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:09.824633   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:11.829090   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:14.015104   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:11.781938   50022 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.305246635s)
	I1108 00:14:11.781979   50022 crio.go:451] Took 3.305377 seconds to extract the tarball
	I1108 00:14:11.781999   50022 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1108 00:14:11.837911   50022 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:14:11.907599   50022 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1108 00:14:11.907634   50022 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1108 00:14:11.907702   50022 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:14:11.907965   50022 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1108 00:14:11.907983   50022 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1108 00:14:11.908131   50022 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1108 00:14:11.907966   50022 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1108 00:14:11.908257   50022 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1108 00:14:11.908131   50022 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1108 00:14:11.908365   50022 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1108 00:14:11.909163   50022 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1108 00:14:11.909239   50022 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1108 00:14:11.909251   50022 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1108 00:14:11.909332   50022 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1108 00:14:11.909171   50022 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1108 00:14:11.909397   50022 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1108 00:14:11.909435   50022 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:14:11.909625   50022 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1108 00:14:12.040043   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1108 00:14:12.042004   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1108 00:14:12.047478   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1108 00:14:12.051016   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1108 00:14:12.095045   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1108 00:14:12.126645   50022 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1108 00:14:12.126718   50022 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1108 00:14:12.126788   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.133035   50022 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1108 00:14:12.133078   50022 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1108 00:14:12.133120   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.164621   50022 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1108 00:14:12.164686   50022 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1108 00:14:12.164754   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.182223   50022 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1108 00:14:12.182267   50022 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1108 00:14:12.182318   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.201151   50022 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1108 00:14:12.201196   50022 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1108 00:14:12.201244   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.201255   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1108 00:14:12.201306   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1108 00:14:12.201305   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1108 00:14:12.201341   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1108 00:14:12.203375   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1108 00:14:12.208529   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1108 00:14:12.341873   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1108 00:14:12.341901   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1108 00:14:12.341954   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1108 00:14:12.341960   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1108 00:14:12.356561   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1108 00:14:12.356663   50022 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1108 00:14:12.361927   50022 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1108 00:14:12.361962   50022 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1108 00:14:12.362023   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.382770   50022 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1108 00:14:12.382819   50022 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1108 00:14:12.382864   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.406169   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1108 00:14:12.406213   50022 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1108 00:14:12.406228   50022 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1108 00:14:12.406273   50022 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1108 00:14:12.406313   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1108 00:14:12.406274   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1108 00:14:12.863910   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:14:14.488498   50022 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0: (2.082152502s)
	I1108 00:14:14.488536   50022 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.082234083s)
	I1108 00:14:14.488548   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1108 00:14:14.488571   50022 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1108 00:14:14.488623   50022 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0: (2.082249259s)
	I1108 00:14:14.488666   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1108 00:14:14.488711   50022 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.624766966s)
	I1108 00:14:14.488762   50022 cache_images.go:92] LoadImages completed in 2.581114029s
	W1108 00:14:14.488842   50022 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
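The block above is the per-image cache dance: `podman image inspect` compares the runtime's image ID against the expected hash; on mismatch the stale tag is removed with `crictl rmi` and the cached tarball is loaded with `podman load -i`. A sketch of that decision for one image, using the pause:3.1 hash and tarball path from the log (the helper layout is illustrative):

    package main

    import (
    	"os/exec"
    	"strings"
    )

    func output(cmd string) (string, error) {
    	out, err := exec.Command("sh", "-c", cmd).Output()
    	return strings.TrimSpace(string(out)), err
    }

    // ensureImage re-loads ref from its cache tarball when the runtime does
    // not already hold the expected image ID ("needs transfer" in the log).
    func ensureImage(ref, wantID, tarball string) error {
    	id, err := output("sudo podman image inspect --format {{.Id}} " + ref)
    	if err == nil && id == wantID {
    		return nil // already present at the right hash
    	}
    	_, _ = output("sudo /usr/bin/crictl rmi " + ref) // best effort, as above
    	_, err = output("sudo podman load -i " + tarball)
    	return err
    }

    func main() {
    	_ = ensureImage("registry.k8s.io/pause:3.1",
    		"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e",
    		"/var/lib/minikube/images/pause_3.1")
    }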
	I1108 00:14:14.488915   50022 ssh_runner.go:195] Run: crio config
	I1108 00:14:14.557127   50022 cni.go:84] Creating CNI manager for ""
	I1108 00:14:14.557155   50022 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:14:14.557176   50022 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1108 00:14:14.557204   50022 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.49 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-590541 NodeName:old-k8s-version-590541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.49"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.49 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1108 00:14:14.557391   50022 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.49
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-590541"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.49
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.49"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-590541
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.49:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 00:14:14.557508   50022 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-590541 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.49
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-590541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
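One detail worth noting in the kubelet drop-in above: the bare ExecStart= line first clears the ExecStart inherited from the base kubelet.service, so the full command on the next line replaces it instead of being rejected as a duplicate in a non-oneshot service. A sketch of emitting such a drop-in (command and paths copied from the log; the writer itself is illustrative):

    package main

    import (
    	"fmt"
    	"os"
    )

    const kubeletDropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-590541 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.49

    [Install]
    `

    func main() {
    	// The empty ExecStart= resets the list; without it systemd would
    	// refuse a second ExecStart in a non-oneshot service.
    	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(kubeletDropIn), 0o644); err != nil {
    		fmt.Println(err)
    	}
    }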
	I1108 00:14:14.557579   50022 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1108 00:14:14.568423   50022 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 00:14:14.568501   50022 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 00:14:14.578581   50022 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1108 00:14:14.596389   50022 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 00:14:14.613956   50022 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1108 00:14:14.631988   50022 ssh_runner.go:195] Run: grep 192.168.50.49	control-plane.minikube.internal$ /etc/hosts
	I1108 00:14:14.636236   50022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.49	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:14:14.648849   50022 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541 for IP: 192.168.50.49
	I1108 00:14:14.648888   50022 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:14:14.649071   50022 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1108 00:14:14.649126   50022 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1108 00:14:14.649231   50022 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/client.key
	I1108 00:14:14.649312   50022 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/apiserver.key.5b7c76e3
	I1108 00:14:14.649375   50022 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/proxy-client.key
	I1108 00:14:14.649542   50022 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem (1338 bytes)
	W1108 00:14:14.649587   50022 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848_empty.pem, impossibly tiny 0 bytes
	I1108 00:14:14.649597   50022 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 00:14:14.649636   50022 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1108 00:14:14.649677   50022 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1108 00:14:14.649714   50022 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1108 00:14:14.649771   50022 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:14:11.058474   51228 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1108 00:14:11.064805   51228 kubeadm.go:787] kubelet initialised
	I1108 00:14:11.064852   51228 kubeadm.go:788] duration metric: took 6.346592ms waiting for restarted kubelet to initialise ...
	I1108 00:14:11.064863   51228 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:14:11.073499   51228 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-rz9t4" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:11.089759   51228 pod_ready.go:97] node "default-k8s-diff-port-039263" hosting pod "coredns-5dd5756b68-rz9t4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.089791   51228 pod_ready.go:81] duration metric: took 16.257238ms waiting for pod "coredns-5dd5756b68-rz9t4" in "kube-system" namespace to be "Ready" ...
	E1108 00:14:11.089803   51228 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-039263" hosting pod "coredns-5dd5756b68-rz9t4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.089811   51228 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:11.100580   51228 pod_ready.go:97] node "default-k8s-diff-port-039263" hosting pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.100605   51228 pod_ready.go:81] duration metric: took 10.783802ms waiting for pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	E1108 00:14:11.100615   51228 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-039263" hosting pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.100621   51228 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:11.113797   51228 pod_ready.go:97] node "default-k8s-diff-port-039263" hosting pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.113826   51228 pod_ready.go:81] duration metric: took 13.195367ms waiting for pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	E1108 00:14:11.113838   51228 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-039263" hosting pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.113847   51228 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:11.124704   51228 pod_ready.go:97] node "default-k8s-diff-port-039263" hosting pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.124736   51228 pod_ready.go:81] duration metric: took 10.87946ms waiting for pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	E1108 00:14:11.124750   51228 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-039263" hosting pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.124760   51228 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-z7b8g" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:11.915650   51228 pod_ready.go:92] pod "kube-proxy-z7b8g" in "kube-system" namespace has status "Ready":"True"
	I1108 00:14:11.915674   51228 pod_ready.go:81] duration metric: took 790.904941ms waiting for pod "kube-proxy-z7b8g" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:11.915686   51228 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:14.011244   51228 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:12.537889   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:14.767882   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:16.322840   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:18.323955   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:14.650662   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1108 00:14:14.682536   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1108 00:14:14.708618   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 00:14:14.737947   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 00:14:14.768365   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 00:14:14.795469   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 00:14:14.824086   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 00:14:14.851375   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 00:14:14.878638   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 00:14:14.906647   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem --> /usr/share/ca-certificates/16848.pem (1338 bytes)
	I1108 00:14:14.933316   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /usr/share/ca-certificates/168482.pem (1708 bytes)
	I1108 00:14:14.961937   50022 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 00:14:14.980167   50022 ssh_runner.go:195] Run: openssl version
	I1108 00:14:14.986053   50022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16848.pem && ln -fs /usr/share/ca-certificates/16848.pem /etc/ssl/certs/16848.pem"
	I1108 00:14:14.996201   50022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16848.pem
	I1108 00:14:15.001410   50022 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1108 00:14:15.001490   50022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16848.pem
	I1108 00:14:15.008681   50022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16848.pem /etc/ssl/certs/51391683.0"
	I1108 00:14:15.022034   50022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168482.pem && ln -fs /usr/share/ca-certificates/168482.pem /etc/ssl/certs/168482.pem"
	I1108 00:14:15.031992   50022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168482.pem
	I1108 00:14:15.037854   50022 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1108 00:14:15.037910   50022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168482.pem
	I1108 00:14:15.045107   50022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168482.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 00:14:15.057464   50022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 00:14:15.070137   50022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:14:15.075848   50022 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:14:15.075917   50022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:14:15.083414   50022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
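The `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's CApath convention: a CA in /etc/ssl/certs is found via a symlink named <subject-hash>.0, hence b5213941.0 for minikubeCA.pem. A sketch of one such installation step (the helper name is an assumption):

    package main

    import (
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCert creates the <subject-hash>.0 symlink that lets OpenSSL's
    // CApath lookup resolve the PEM, exactly what the ln -fs lines above do.
    func linkCert(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // -f semantics
    	return os.Symlink(pem, link)
    }

    func main() {
    	_ = linkCert("/usr/share/ca-certificates/minikubeCA.pem")
    }

The `-checkend 86400` runs that follow are the complementary validity check: a zero exit means the certificate does not expire within the next 86400 seconds (24 hours), so regeneration is skipped.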
	I1108 00:14:15.094499   50022 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1108 00:14:15.099437   50022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 00:14:15.105940   50022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 00:14:15.112527   50022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 00:14:15.118429   50022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 00:14:15.124769   50022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 00:14:15.130975   50022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1108 00:14:15.136772   50022 kubeadm.go:404] StartCluster: {Name:old-k8s-version-590541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-590541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.49 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:14:15.136903   50022 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 00:14:15.136952   50022 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:14:15.184018   50022 cri.go:89] found id: ""
	I1108 00:14:15.184095   50022 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 00:14:15.196900   50022 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1108 00:14:15.196924   50022 kubeadm.go:636] restartCluster start
	I1108 00:14:15.196994   50022 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 00:14:15.208810   50022 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:15.210399   50022 kubeconfig.go:92] found "old-k8s-version-590541" server: "https://192.168.50.49:8443"
	I1108 00:14:15.214114   50022 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 00:14:15.223586   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:15.223644   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:15.234506   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:15.234525   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:15.234565   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:15.244971   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:15.745626   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:15.745698   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:15.757830   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:16.246012   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:16.246090   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:16.258583   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:16.745965   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:16.746045   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:16.758317   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:17.245985   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:17.246087   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:17.257615   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:17.745646   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:17.745715   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:17.757591   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:18.245666   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:18.245773   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:18.258225   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:18.745765   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:18.745842   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:18.756699   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:19.245946   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:19.246016   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:19.258255   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:16.222461   51228 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:18.722269   51228 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"True"
	I1108 00:14:18.722291   51228 pod_ready.go:81] duration metric: took 6.806598217s waiting for pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:18.722300   51228 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:20.739081   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:17.264976   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:19.265242   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:21.265825   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:20.822592   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:23.321115   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:19.745997   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:19.746135   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:19.757885   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:20.245884   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:20.245988   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:20.258408   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:20.745963   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:20.746035   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:20.757892   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:21.246052   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:21.246133   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:21.258401   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:21.745947   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:21.746040   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:21.759160   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:22.246004   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:22.246075   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:22.258859   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:22.745787   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:22.745889   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:22.758099   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:23.245961   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:23.246068   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:23.258810   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:23.745167   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:23.745248   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:23.757093   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:24.245690   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:24.245751   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:24.258264   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
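The loop above polls roughly twice a second for a kube-apiserver process on the node; pgrep exiting with status 1 simply means no match yet. The same check by hand, with the flag meanings spelled out (run on the node, e.g. via minikube ssh):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # -f  match the pattern against the full command line, not just the executable name
    # -x  require the pattern to match that command line exactly
    # -n  report only the newest matching process
    # prints a PID and exits 0 once the apiserver container is running; exits 1 until then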
	I1108 00:14:22.739380   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:24.739502   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:23.766235   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:26.264779   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:25.322215   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:27.322896   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:24.745944   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:24.746024   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:24.759229   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:25.224130   50022 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1108 00:14:25.224188   50022 kubeadm.go:1128] stopping kube-system containers ...
	I1108 00:14:25.224207   50022 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1108 00:14:25.224267   50022 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:14:25.271348   50022 cri.go:89] found id: ""
	I1108 00:14:25.271418   50022 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1108 00:14:25.287540   50022 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:14:25.296398   50022 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:14:25.296452   50022 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:14:25.305111   50022 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1108 00:14:25.305137   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:25.434385   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:26.361847   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:26.561621   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:26.667973   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
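Rather than a full kubeadm init, the reconfigure path re-runs individual init phases against the freshly copied kubeadm.yaml. The sequence, exactly as invoked above (binary and config paths as in the log):

    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all          --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all     --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start      --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all  --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local         --config /var/tmp/minikube/kubeadm.yaml

Certs and kubeconfigs come first because the admin.conf/kubelet.conf checks above found them missing; the kubelet is then restarted and the static-pod manifests for the control plane and etcd are rewritten.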
	I1108 00:14:26.798155   50022 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:14:26.798240   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:26.822210   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:27.335493   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:27.836175   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:28.336398   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:28.836400   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:28.862790   50022 api_server.go:72] duration metric: took 2.064638513s to wait for apiserver process to appear ...
	I1108 00:14:28.862814   50022 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:14:28.862827   50022 api_server.go:253] Checking apiserver healthz at https://192.168.50.49:8443/healthz ...
	I1108 00:14:26.740013   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:28.740958   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:28.266931   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:30.765036   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:29.827237   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:32.323375   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:33.863452   50022 api_server.go:269] stopped: https://192.168.50.49:8443/healthz: Get "https://192.168.50.49:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1108 00:14:33.863495   50022 api_server.go:253] Checking apiserver healthz at https://192.168.50.49:8443/healthz ...
	I1108 00:14:34.513495   50022 api_server.go:279] https://192.168.50.49:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:14:34.513530   50022 api_server.go:103] status: https://192.168.50.49:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:14:31.240440   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:33.739764   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:35.014492   50022 api_server.go:253] Checking apiserver healthz at https://192.168.50.49:8443/healthz ...
	I1108 00:14:35.020991   50022 api_server.go:279] https://192.168.50.49:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1108 00:14:35.021019   50022 api_server.go:103] status: https://192.168.50.49:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1108 00:14:35.514559   50022 api_server.go:253] Checking apiserver healthz at https://192.168.50.49:8443/healthz ...
	I1108 00:14:35.521451   50022 api_server.go:279] https://192.168.50.49:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1108 00:14:35.521475   50022 api_server.go:103] status: https://192.168.50.49:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1108 00:14:36.014620   50022 api_server.go:253] Checking apiserver healthz at https://192.168.50.49:8443/healthz ...
	I1108 00:14:36.021243   50022 api_server.go:279] https://192.168.50.49:8443/healthz returned 200:
	ok
	I1108 00:14:36.029191   50022 api_server.go:141] control plane version: v1.16.0
	I1108 00:14:36.029214   50022 api_server.go:131] duration metric: took 7.166394703s to wait for apiserver health ...
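The healthz progression above (context deadline exceeded, then 403, then 500, then 200) is the normal shape of a cold control-plane restart and can be reproduced by hand. A sketch, assuming the endpoint is reachable from the host (-k because the probe is anonymous against minikube's self-signed CA):

    curl -sk https://192.168.50.49:8443/healthz
    # 403: the apiserver is serving, but the RBAC role that lets anonymous users read
    #      /healthz has not been bootstrapped yet (minikube counts this as "responding")
    # 500: poststarthooks still failing (rbac/bootstrap-roles, ca-registration, ...)
    # 200: "ok", all checks pass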
	I1108 00:14:36.029225   50022 cni.go:84] Creating CNI manager for ""
	I1108 00:14:36.029232   50022 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:14:36.030800   50022 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:14:32.765436   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:35.264657   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:34.825199   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:37.322438   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:36.032078   50022 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:14:36.042827   50022 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
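The 457-byte conflist itself is not reproduced in the log. A representative bridge CNI config of the general shape written here could look like the following; the contents are illustrative, not the actual file:

    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF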
	I1108 00:14:36.062239   50022 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:14:36.070373   50022 system_pods.go:59] 7 kube-system pods found
	I1108 00:14:36.070404   50022 system_pods.go:61] "coredns-5644d7b6d9-cmx8s" [510a3ae2-abff-40f9-8605-7fd6cc5316de] Running
	I1108 00:14:36.070414   50022 system_pods.go:61] "etcd-old-k8s-version-590541" [4597d43f-d424-4591-8a5c-6e4a7d60bb2b] Running
	I1108 00:14:36.070420   50022 system_pods.go:61] "kube-apiserver-old-k8s-version-590541" [353c1157-7cac-4809-91ea-30745ecbc10c] Running
	I1108 00:14:36.070427   50022 system_pods.go:61] "kube-controller-manager-old-k8s-version-590541" [30679f8f-aa28-4349-ada1-97af45c0c065] Running
	I1108 00:14:36.070432   50022 system_pods.go:61] "kube-proxy-r8p96" [21ac95e4-595f-4520-8174-ef5e1334c1be] Running
	I1108 00:14:36.070437   50022 system_pods.go:61] "kube-scheduler-old-k8s-version-590541" [f406d277-d786-417a-9428-8433143db81c] Running
	I1108 00:14:36.070443   50022 system_pods.go:61] "storage-provisioner" [26f85033-bd24-4332-ba8d-1aed49559417] Running
	I1108 00:14:36.070452   50022 system_pods.go:74] duration metric: took 8.188793ms to wait for pod list to return data ...
	I1108 00:14:36.070461   50022 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:14:36.075209   50022 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:14:36.075242   50022 node_conditions.go:123] node cpu capacity is 2
	I1108 00:14:36.075259   50022 node_conditions.go:105] duration metric: took 4.788324ms to run NodePressure ...
	I1108 00:14:36.075286   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:36.310748   50022 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1108 00:14:36.319886   50022 retry.go:31] will retry after 259.644928ms: kubelet not initialised
	I1108 00:14:36.584728   50022 retry.go:31] will retry after 259.541836ms: kubelet not initialised
	I1108 00:14:36.851013   50022 retry.go:31] will retry after 319.229418ms: kubelet not initialised
	I1108 00:14:37.192544   50022 retry.go:31] will retry after 949.166954ms: kubelet not initialised
	I1108 00:14:38.149087   50022 retry.go:31] will retry after 1.159461481s: kubelet not initialised
	I1108 00:14:39.313777   50022 retry.go:31] will retry after 1.441288405s: kubelet not initialised
	I1108 00:14:36.240206   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:38.240974   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:40.739451   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:37.266643   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:39.267727   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:41.765636   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:39.323180   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:41.323278   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:43.821724   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:40.762380   50022 retry.go:31] will retry after 2.811416386s: kubelet not initialised
	I1108 00:14:43.579217   50022 retry.go:31] will retry after 4.427599597s: kubelet not initialised
	I1108 00:14:42.739823   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:45.238841   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:44.266015   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:46.766564   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:45.822389   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:47.822637   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:48.011401   50022 retry.go:31] will retry after 9.583320686s: kubelet not initialised
	I1108 00:14:47.239708   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:49.739520   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:49.264876   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:51.265467   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:50.321858   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:52.823189   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:51.740005   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:54.239137   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:53.267904   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:55.767709   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:55.321381   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:57.821679   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:57.600096   50022 retry.go:31] will retry after 8.628668417s: kubelet not initialised
	I1108 00:14:56.242527   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:58.740775   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:00.742908   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:58.263898   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:00.264487   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:59.822276   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:02.322959   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:02.744271   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:05.239364   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:02.764787   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:04.767529   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:04.821706   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:06.822611   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:08.822950   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:06.235557   50022 retry.go:31] will retry after 18.967803661s: kubelet not initialised
	I1108 00:15:07.239957   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:09.243640   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:07.268913   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:09.765546   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:10.823397   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:13.320774   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:11.741381   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:14.239143   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:12.265009   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:14.265329   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:16.265470   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:15.322148   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:17.821371   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:16.740364   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:18.742058   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:18.267349   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:20.763380   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:19.821495   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:21.822583   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:21.239196   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:23.239716   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:25.740472   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:22.764934   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:25.264695   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:24.322074   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:26.324255   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:28.823261   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:25.208456   50022 kubeadm.go:787] kubelet initialised
	I1108 00:15:25.208482   50022 kubeadm.go:788] duration metric: took 48.897709945s waiting for restarted kubelet to initialise ...
	I1108 00:15:25.208492   50022 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:15:25.213730   50022 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-cmx8s" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.220419   50022 pod_ready.go:92] pod "coredns-5644d7b6d9-cmx8s" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:25.220444   50022 pod_ready.go:81] duration metric: took 6.688227ms waiting for pod "coredns-5644d7b6d9-cmx8s" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.220455   50022 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-n42t2" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.225713   50022 pod_ready.go:92] pod "coredns-5644d7b6d9-n42t2" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:25.225734   50022 pod_ready.go:81] duration metric: took 5.271879ms waiting for pod "coredns-5644d7b6d9-n42t2" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.225742   50022 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.231081   50022 pod_ready.go:92] pod "etcd-old-k8s-version-590541" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:25.231102   50022 pod_ready.go:81] duration metric: took 5.353373ms waiting for pod "etcd-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.231113   50022 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.235653   50022 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-590541" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:25.235676   50022 pod_ready.go:81] duration metric: took 4.554135ms waiting for pod "kube-apiserver-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.235687   50022 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.607677   50022 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-590541" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:25.607702   50022 pod_ready.go:81] duration metric: took 372.006515ms waiting for pod "kube-controller-manager-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.607715   50022 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r8p96" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:26.007866   50022 pod_ready.go:92] pod "kube-proxy-r8p96" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:26.007901   50022 pod_ready.go:81] duration metric: took 400.175462ms waiting for pod "kube-proxy-r8p96" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:26.007915   50022 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:26.408998   50022 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-590541" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:26.409023   50022 pod_ready.go:81] duration metric: took 401.100386ms waiting for pod "kube-scheduler-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:26.409037   50022 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:28.714602   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
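From here to the end of this stretch the log is dominated by pod_ready polls: four profiles (pids 50022, 50505, 50613, 51228) each repeatedly checking a metrics-server pod that stays not-Ready. To inspect such a pod by hand, a diagnostic sketch (context, namespace, and pod name taken from the log; the label selector is an assumption, and none of this is part of the test itself):

    kubectl --context old-k8s-version-590541 -n kube-system get pods -l k8s-app=metrics-server
    kubectl --context old-k8s-version-590541 -n kube-system describe pod metrics-server-74d5856cc6-ghpjp
    kubectl --context old-k8s-version-590541 -n kube-system logs deploy/metrics-server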
	I1108 00:15:27.743907   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:30.242025   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:27.764799   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:29.765943   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:31.322316   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:33.821723   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:30.715349   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:33.213961   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:32.739648   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:35.238544   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:32.270073   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:34.764272   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:36.768065   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:36.322383   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:38.821688   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:35.215842   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:37.714618   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:37.239003   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:39.239229   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:39.266142   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:41.765225   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:40.822847   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:42.823419   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:40.214573   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:42.214623   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:41.239832   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:43.740100   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:43.765773   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:45.767613   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:45.323162   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:47.323716   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:44.714312   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:46.714541   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:49.214939   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:46.238097   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:48.240079   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:50.740404   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:48.264657   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:50.266155   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:49.821171   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:51.821247   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:53.821754   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:51.715388   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:54.214072   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:53.239902   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:55.240606   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:52.764709   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:54.765802   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:55.821843   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:57.822037   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:56.214628   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:58.215873   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:57.739805   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:59.742442   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:57.264640   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:59.265598   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:01.269674   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:59.823743   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:02.321221   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:00.716761   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:02.717300   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:02.240157   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:04.740325   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:03.765956   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:06.266810   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:04.322200   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:06.325043   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:08.822004   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:05.214678   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:07.214757   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:06.741067   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:09.238455   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:08.764592   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:10.764740   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:11.321882   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:13.323997   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:09.715347   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:12.215814   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:11.238960   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:13.239188   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:15.239933   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:13.268590   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:15.767860   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:15.822286   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:18.323447   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:14.715001   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:17.214864   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:19.220945   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:17.743653   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:20.239877   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:18.267403   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:20.765825   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:20.828982   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:23.322508   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:21.715604   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:24.215532   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:22.240232   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:24.240410   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:22.767921   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:25.266374   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:25.821672   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:28.323033   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:26.715605   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:29.215673   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:26.240493   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:28.739795   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:30.739838   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:27.268851   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:29.765296   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:30.822234   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:32.822653   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:31.714216   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:33.714677   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:33.238984   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:35.239828   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:32.264549   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:34.765297   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:34.823243   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:37.321349   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:35.715073   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:37.715879   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:37.240347   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:39.739526   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:37.265284   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:39.764898   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:39.322588   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:41.822017   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:40.214804   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:42.714783   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:42.238649   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:44.238830   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:42.265404   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:44.266352   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:46.763687   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:44.321389   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:46.322294   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:48.822670   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:45.215415   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:47.715215   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:46.239884   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:48.740698   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:50.740725   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:48.765820   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:51.265744   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:51.321664   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:53.321945   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:49.715720   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:52.215540   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:53.239897   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:55.241013   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:53.764035   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:55.767704   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:55.324156   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:57.821380   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:54.716014   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:57.213472   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:59.216084   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:57.740250   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:59.740808   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:58.264915   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:00.764064   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:59.823358   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:01.824897   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:03.827668   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:01.714273   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:03.714538   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:02.238718   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:04.239300   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:02.766695   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:05.268491   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:06.321926   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:08.822906   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:06.215268   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:08.215344   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:06.740893   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:09.240404   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:07.764370   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:09.764952   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:11.765807   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:10.823030   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:13.320640   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:10.715494   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:13.214139   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:11.741308   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:13.741849   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:14.265117   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:16.265550   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:15.322703   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:17.822360   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:15.214808   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:17.214944   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:19.215663   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:16.239627   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:18.241991   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:20.742074   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:18.764043   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:20.764244   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:20.322245   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:22.821679   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:21.715000   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:23.715813   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:23.240800   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:25.741203   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:23.264974   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:25.267122   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:24.823144   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:27.322674   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:26.215099   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:28.215710   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:28.242151   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:30.741098   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:27.765060   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:30.266360   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:29.821467   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:31.822093   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:30.714747   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:32.716931   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:33.241199   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:35.744300   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:32.765221   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:34.766163   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:34.320569   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:36.321680   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:38.321803   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:35.215458   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:37.715660   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:38.241103   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:40.241689   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:37.264893   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:39.264980   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:41.764589   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:40.323069   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:42.822323   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:40.214357   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:42.215838   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:42.738943   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:44.738995   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:44.265516   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:46.764435   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:44.827347   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:47.321911   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:44.715762   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:46.716679   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:49.214899   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:46.739838   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:48.740204   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:48.766668   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:51.266657   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:49.822604   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:51.823333   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:51.935354   50505 pod_ready.go:81] duration metric: took 4m0.000854035s waiting for pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace to be "Ready" ...
	E1108 00:17:51.935397   50505 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1108 00:17:51.935438   50505 pod_ready.go:38] duration metric: took 4m11.589382956s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:17:51.935470   50505 kubeadm.go:640] restartCluster took 4m31.32204509s
	W1108 00:17:51.935533   50505 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1108 00:17:51.935560   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
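
The four interleaved workers above (process IDs 50505, 50022, 51228 and 50613, one per StartStop profile) are stuck in the same loop: metrics-server never reports Ready, so once the 4m0s budget expires minikube gives up on restartCluster and falls back to 'kubeadm reset' plus a fresh init. A minimal client-go sketch of that polling pattern (a hypothetical standalone helper, not minikube's actual pod_ready.go):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a named pod until its Ready condition is True,
    // failing with a timeout error after the given budget (4m0s in the log).
    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat API hiccups as "not ready yet" and keep polling
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        // Placeholder kubeconfig path; substitute the profile's kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        err = waitPodReady(cs, "kube-system", "metrics-server-57f55c9bc5-th89c", 4*time.Minute)
        fmt.Println("wait result:", err)
    }
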
	I1108 00:17:51.715171   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:53.716530   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:51.244682   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:53.741272   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:55.743900   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:53.765757   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:55.766672   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:56.218347   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:58.715621   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:58.246553   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:00.740366   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:58.265496   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:58.958296   50613 pod_ready.go:81] duration metric: took 4m0.000224971s waiting for pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace to be "Ready" ...
	E1108 00:17:58.958324   50613 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1108 00:17:58.958349   50613 pod_ready.go:38] duration metric: took 4m11.678298333s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:17:58.958373   50613 kubeadm.go:640] restartCluster took 4m32.361691152s
	W1108 00:17:58.958429   50613 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1108 00:17:58.958455   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1108 00:18:01.214685   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:03.216848   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:03.239882   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:05.739403   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:06.321352   50505 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.385768547s)
	I1108 00:18:06.321435   50505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:06.335385   50505 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:18:06.345310   50505 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:18:06.355261   50505 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:18:06.355301   50505 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1108 00:18:06.570938   50505 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
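
The 'ls -la' probe a few lines up is how minikube decides between cleaning up stale kubeconfigs and reinitialising from scratch: 'kubeadm reset' has already deleted all four files, so the probe exits with status 2, stale-config cleanup is skipped, and a fresh 'kubeadm init' is launched with preflight errors for the leftover manifest and data directories ignored. A standalone approximation of the probe (illustrative only, not minikube's code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The four kubeconfigs kubeadm writes; any missing file makes ls exit non-zero.
        args := []string{"ls", "-la",
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        if err := exec.Command("sudo", args...).Run(); err != nil {
            fmt.Println("config check failed, skipping stale config cleanup:", err)
            return // proceed straight to `kubeadm init`
        }
        fmt.Println("existing kubeconfigs found; stale config cleanup would run first")
    }
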
	I1108 00:18:05.715384   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:07.716056   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:07.739455   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:09.740028   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:09.716612   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:12.215477   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:11.742123   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:14.242024   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:15.847386   50613 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (16.888899647s)
	I1108 00:18:15.847471   50613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:15.865800   50613 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:18:15.877857   50613 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:18:15.888952   50613 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:18:15.889014   50613 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1108 00:18:16.126155   50613 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 00:18:17.730060   50505 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1108 00:18:17.730164   50505 kubeadm.go:322] [preflight] Running pre-flight checks
	I1108 00:18:17.730282   50505 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 00:18:17.730411   50505 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 00:18:17.730564   50505 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 00:18:17.730648   50505 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 00:18:17.732613   50505 out.go:204]   - Generating certificates and keys ...
	I1108 00:18:17.732709   50505 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1108 00:18:17.732788   50505 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1108 00:18:17.732916   50505 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1108 00:18:17.732995   50505 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1108 00:18:17.733104   50505 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1108 00:18:17.733186   50505 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1108 00:18:17.733265   50505 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1108 00:18:17.733344   50505 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1108 00:18:17.733429   50505 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1108 00:18:17.733526   50505 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1108 00:18:17.733572   50505 kubeadm.go:322] [certs] Using the existing "sa" key
	I1108 00:18:17.733640   50505 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 00:18:17.733699   50505 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 00:18:17.733763   50505 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 00:18:17.733838   50505 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 00:18:17.733905   50505 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 00:18:17.734002   50505 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 00:18:17.734088   50505 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 00:18:17.735708   50505 out.go:204]   - Booting up control plane ...
	I1108 00:18:17.735808   50505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 00:18:17.735898   50505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 00:18:17.735981   50505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 00:18:17.736113   50505 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 00:18:17.736209   50505 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 00:18:17.736255   50505 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1108 00:18:17.736431   50505 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1108 00:18:17.736517   50505 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503639 seconds
	I1108 00:18:17.736637   50505 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 00:18:17.736779   50505 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 00:18:17.736873   50505 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 00:18:17.737093   50505 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-320390 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 00:18:17.737168   50505 kubeadm.go:322] [bootstrap-token] Using token: 8lntxi.1hule2axpc9kkhcs
	I1108 00:18:17.738763   50505 out.go:204]   - Configuring RBAC rules ...
	I1108 00:18:17.738904   50505 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 00:18:17.739014   50505 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 00:18:17.739197   50505 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 00:18:17.739364   50505 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I1108 00:18:17.739534   50505 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 00:18:17.739651   50505 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 00:18:17.739781   50505 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 00:18:17.739829   50505 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1108 00:18:17.739881   50505 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1108 00:18:17.739889   50505 kubeadm.go:322] 
	I1108 00:18:17.739956   50505 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1108 00:18:17.739964   50505 kubeadm.go:322] 
	I1108 00:18:17.740051   50505 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1108 00:18:17.740065   50505 kubeadm.go:322] 
	I1108 00:18:17.740094   50505 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1108 00:18:17.740165   50505 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 00:18:17.740229   50505 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 00:18:17.740239   50505 kubeadm.go:322] 
	I1108 00:18:17.740311   50505 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1108 00:18:17.740320   50505 kubeadm.go:322] 
	I1108 00:18:17.740375   50505 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 00:18:17.740385   50505 kubeadm.go:322] 
	I1108 00:18:17.740443   50505 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1108 00:18:17.740528   50505 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 00:18:17.740629   50505 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 00:18:17.740640   50505 kubeadm.go:322] 
	I1108 00:18:17.740733   50505 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 00:18:17.740840   50505 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1108 00:18:17.740860   50505 kubeadm.go:322] 
	I1108 00:18:17.740959   50505 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8lntxi.1hule2axpc9kkhcs \
	I1108 00:18:17.741077   50505 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 \
	I1108 00:18:17.741106   50505 kubeadm.go:322] 	--control-plane 
	I1108 00:18:17.741114   50505 kubeadm.go:322] 
	I1108 00:18:17.741207   50505 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1108 00:18:17.741221   50505 kubeadm.go:322] 
	I1108 00:18:17.741312   50505 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8lntxi.1hule2axpc9kkhcs \
	I1108 00:18:17.741435   50505 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 
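
Both join commands printed by kubeadm pin the cluster CA through --discovery-token-ca-cert-hash. That value is the SHA-256 digest of the CA certificate's DER-encoded Subject Public Key Info, so it can be recomputed from the certificate on disk. A sketch using the certificateDir logged above (the file name ca.crt is the kubeadm default; the rest is illustration):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // CA written under the certificateDir minikube passes to kubeadm.
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
        fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
    }
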
	I1108 00:18:17.741451   50505 cni.go:84] Creating CNI manager for ""
	I1108 00:18:17.741460   50505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:18:17.742996   50505 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:18:17.744307   50505 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:18:17.800065   50505 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1108 00:18:17.844561   50505 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 00:18:17.844628   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:17.844636   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=no-preload-320390 minikube.k8s.io/updated_at=2023_11_08T00_18_17_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:18.268124   50505 ops.go:34] apiserver oom_adj: -16
	I1108 00:18:18.268268   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:18.391271   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:18.999821   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:14.715492   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:16.716036   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:19.217395   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:16.739748   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:18.722551   51228 pod_ready.go:81] duration metric: took 4m0.000232672s waiting for pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace to be "Ready" ...
	E1108 00:18:18.722600   51228 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1108 00:18:18.722616   51228 pod_ready.go:38] duration metric: took 4m7.657742468s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:18:18.722637   51228 kubeadm.go:640] restartCluster took 4m28.262375275s
	W1108 00:18:18.722722   51228 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1108 00:18:18.722756   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1108 00:18:19.500069   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:20.000575   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:20.500545   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:20.999918   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:21.499960   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:22.000673   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:22.499811   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:23.000501   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:23.499942   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:24.000407   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:21.217427   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:23.715751   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:27.224428   50613 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1108 00:18:27.224497   50613 kubeadm.go:322] [preflight] Running pre-flight checks
	I1108 00:18:27.224589   50613 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 00:18:27.224720   50613 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 00:18:27.224916   50613 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 00:18:27.225019   50613 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 00:18:27.226893   50613 out.go:204]   - Generating certificates and keys ...
	I1108 00:18:27.227001   50613 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1108 00:18:27.227091   50613 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1108 00:18:27.227201   50613 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1108 00:18:27.227279   50613 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1108 00:18:27.227365   50613 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1108 00:18:27.227433   50613 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1108 00:18:27.227517   50613 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1108 00:18:27.227602   50613 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1108 00:18:27.227719   50613 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1108 00:18:27.227808   50613 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1108 00:18:27.227864   50613 kubeadm.go:322] [certs] Using the existing "sa" key
	I1108 00:18:27.227938   50613 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 00:18:27.228013   50613 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 00:18:27.228102   50613 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 00:18:27.228186   50613 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 00:18:27.228264   50613 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 00:18:27.228387   50613 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 00:18:27.228479   50613 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 00:18:27.229827   50613 out.go:204]   - Booting up control plane ...
	I1108 00:18:27.229950   50613 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 00:18:27.230032   50613 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 00:18:27.230124   50613 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 00:18:27.230265   50613 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 00:18:27.230387   50613 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 00:18:27.230447   50613 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1108 00:18:27.230699   50613 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1108 00:18:27.230810   50613 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503846 seconds
	I1108 00:18:27.230970   50613 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 00:18:27.231145   50613 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 00:18:27.231237   50613 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 00:18:27.231478   50613 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-253253 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 00:18:27.231573   50613 kubeadm.go:322] [bootstrap-token] Using token: vyjibp.12wjj754q6czu5uo
	I1108 00:18:27.233159   50613 out.go:204]   - Configuring RBAC rules ...
	I1108 00:18:27.233266   50613 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 00:18:27.233340   50613 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 00:18:27.233454   50613 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 00:18:27.233558   50613 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I1108 00:18:27.233693   50613 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 00:18:27.233793   50613 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 00:18:27.233943   50613 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 00:18:27.234012   50613 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1108 00:18:27.234074   50613 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1108 00:18:27.234086   50613 kubeadm.go:322] 
	I1108 00:18:27.234174   50613 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1108 00:18:27.234191   50613 kubeadm.go:322] 
	I1108 00:18:27.234300   50613 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1108 00:18:27.234310   50613 kubeadm.go:322] 
	I1108 00:18:27.234337   50613 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1108 00:18:27.234388   50613 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 00:18:27.234432   50613 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 00:18:27.234436   50613 kubeadm.go:322] 
	I1108 00:18:27.234490   50613 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1108 00:18:27.234507   50613 kubeadm.go:322] 
	I1108 00:18:27.234567   50613 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 00:18:27.234577   50613 kubeadm.go:322] 
	I1108 00:18:27.234651   50613 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1108 00:18:27.234756   50613 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 00:18:27.234858   50613 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 00:18:27.234873   50613 kubeadm.go:322] 
	I1108 00:18:27.234959   50613 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 00:18:27.235056   50613 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1108 00:18:27.235066   50613 kubeadm.go:322] 
	I1108 00:18:27.235184   50613 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token vyjibp.12wjj754q6czu5uo \
	I1108 00:18:27.235334   50613 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 \
	I1108 00:18:27.235369   50613 kubeadm.go:322] 	--control-plane 
	I1108 00:18:27.235378   50613 kubeadm.go:322] 
	I1108 00:18:27.235476   50613 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1108 00:18:27.235487   50613 kubeadm.go:322] 
	I1108 00:18:27.235585   50613 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vyjibp.12wjj754q6czu5uo \
	I1108 00:18:27.235734   50613 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 
	I1108 00:18:27.235751   50613 cni.go:84] Creating CNI manager for ""
	I1108 00:18:27.235759   50613 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:18:27.237411   50613 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:18:24.499703   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:24.999659   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:25.499724   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:26.000534   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:26.500532   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:26.999903   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:27.500582   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:28.000156   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:28.500443   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:29.000019   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:26.213623   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:28.214432   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:29.500525   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:29.999698   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:30.173272   50505 kubeadm.go:1081] duration metric: took 12.328709999s to wait for elevateKubeSystemPrivileges.
	I1108 00:18:30.173304   50505 kubeadm.go:406] StartCluster complete in 5m9.613679996s
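
The burst of 'kubectl get sa default' calls between 00:18:18 and 00:18:30 is a readiness gate: the "default" ServiceAccount only appears once kube-controller-manager's token controller is running, so minikube retries roughly every 500ms before applying the minikube-rbac cluster-admin binding. A standalone approximation of that loop (the 2m budget is an assumption for illustration):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.28.3/kubectl"
        deadline := time.Now().Add(2 * time.Minute) // illustrative budget
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            if cmd.Run() == nil {
                fmt.Println("default ServiceAccount exists; safe to apply RBAC")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the default ServiceAccount")
    }
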
	I1108 00:18:30.173323   50505 settings.go:142] acquiring lock: {Name:mk24113e0811d0822c92609e9886aa6fa175d90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:18:30.173399   50505 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:18:30.175022   50505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:18:30.175277   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 00:18:30.175394   50505 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1108 00:18:30.175512   50505 addons.go:69] Setting storage-provisioner=true in profile "no-preload-320390"
	I1108 00:18:30.175534   50505 addons.go:231] Setting addon storage-provisioner=true in "no-preload-320390"
	W1108 00:18:30.175546   50505 addons.go:240] addon storage-provisioner should already be in state true
	I1108 00:18:30.175591   50505 host.go:66] Checking if "no-preload-320390" exists ...
	I1108 00:18:30.175595   50505 config.go:182] Loaded profile config "no-preload-320390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:18:30.175648   50505 addons.go:69] Setting default-storageclass=true in profile "no-preload-320390"
	I1108 00:18:30.175669   50505 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-320390"
	I1108 00:18:30.175856   50505 addons.go:69] Setting metrics-server=true in profile "no-preload-320390"
	I1108 00:18:30.175880   50505 addons.go:231] Setting addon metrics-server=true in "no-preload-320390"
	W1108 00:18:30.175890   50505 addons.go:240] addon metrics-server should already be in state true
	I1108 00:18:30.175932   50505 host.go:66] Checking if "no-preload-320390" exists ...
	I1108 00:18:30.176004   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.176047   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.176074   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.176110   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.176255   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.176297   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.193487   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34549
	I1108 00:18:30.194065   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.194643   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38457
	I1108 00:18:30.194791   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.194809   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.195197   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.195244   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.195454   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35159
	I1108 00:18:30.195741   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.195758   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.195840   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.195975   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.196019   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.196254   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.196377   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.196401   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.196444   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetState
	I1108 00:18:30.196747   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.197318   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.197365   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.200432   50505 addons.go:231] Setting addon default-storageclass=true in "no-preload-320390"
	W1108 00:18:30.200454   50505 addons.go:240] addon default-storageclass should already be in state true
	I1108 00:18:30.200482   50505 host.go:66] Checking if "no-preload-320390" exists ...
	I1108 00:18:30.200858   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.200904   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.214840   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45815
	I1108 00:18:30.215335   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.215693   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.215710   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.216018   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.216163   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetState
	I1108 00:18:30.216761   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32969
	I1108 00:18:30.217467   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.218005   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:18:30.218255   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.218276   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.218567   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.218686   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetState
	I1108 00:18:30.218895   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33449
	I1108 00:18:30.219282   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.221453   50505 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:18:30.219887   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.220152   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:18:30.227122   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.227187   50505 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:18:30.227203   50505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 00:18:30.227220   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:18:30.229126   50505 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1108 00:18:30.227716   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.230458   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:18:30.231018   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:18:30.231625   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:18:30.231640   50505 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 00:18:30.231664   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:18:30.231663   50505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 00:18:30.231687   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:18:30.231871   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:18:30.232040   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:18:30.232130   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.232164   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.232167   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:18:30.234984   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:18:30.235307   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:18:30.235327   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:18:30.235589   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:18:30.235819   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:18:30.236102   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:18:30.236409   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:18:30.248939   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33483
	I1108 00:18:30.249596   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.250088   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.250105   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.250535   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.250715   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetState
	I1108 00:18:30.252631   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:18:30.252909   50505 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 00:18:30.252923   50505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 00:18:30.252941   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:18:30.255926   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:18:30.256320   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:18:30.256354   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:18:30.256440   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:18:30.256639   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:18:30.256795   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:18:30.257009   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:18:30.299537   50505 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-320390" context rescaled to 1 replicas
	I1108 00:18:30.299586   50505 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.176 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 00:18:30.301520   50505 out.go:177] * Verifying Kubernetes components...
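
Just before this wait, the "coredns" deployment was rescaled to a single replica (the kapi.go:248 line above): a one-node profile gains nothing from two DNS replicas. Done by hand, the equivalent is a 'kubectl scale' call; a sketch shelling out to kubectl (minikube itself performs the rescale through the API rather than by exec'ing kubectl):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Equivalent of the rescale logged by kapi.go: one CoreDNS replica
        // is enough for a single-node profile.
        out, err := exec.Command("kubectl", "--context", "no-preload-320390",
            "-n", "kube-system", "scale", "deployment", "coredns",
            "--replicas=1").CombinedOutput()
        fmt.Println(string(out), err)
    }
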
	I1108 00:18:27.238758   50613 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:18:27.263679   50613 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1108 00:18:27.350198   50613 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 00:18:27.350271   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:27.350293   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=embed-certs-253253 minikube.k8s.io/updated_at=2023_11_08T00_18_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:27.409145   50613 ops.go:34] apiserver oom_adj: -16
	I1108 00:18:27.761874   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:27.882030   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:28.495425   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:28.995764   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:29.495154   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:29.994859   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:30.495492   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:30.995328   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:31.495353   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:30.303227   50505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:30.426941   50505 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 00:18:30.426964   50505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1108 00:18:30.450862   50505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:18:30.456250   50505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 00:18:30.482239   50505 node_ready.go:35] waiting up to 6m0s for node "no-preload-320390" to be "Ready" ...
	I1108 00:18:30.482286   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 00:18:30.493041   50505 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 00:18:30.493073   50505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 00:18:30.542548   50505 node_ready.go:49] node "no-preload-320390" has status "Ready":"True"
	I1108 00:18:30.542579   50505 node_ready.go:38] duration metric: took 60.300148ms waiting for node "no-preload-320390" to be "Ready" ...
	I1108 00:18:30.542593   50505 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
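
A minimal manual equivalent of the pod_ready wait above, polling the same system-critical labels the log lists. This is a sketch, not minikube's own code path, and it assumes the usual minikube convention that the kubectl context carries the profile name (no-preload-320390):

    # Wait for the DNS pods the log is polling; selector taken from the log line above.
    kubectl --context no-preload-320390 -n kube-system wait \
      --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
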
	I1108 00:18:30.554527   50505 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:18:30.554560   50505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 00:18:30.648882   50505 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:30.658134   50505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:18:32.959227   50505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.50832393s)
	I1108 00:18:32.959242   50505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.502960333s)
	I1108 00:18:32.959281   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:32.959287   50505 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.476976723s)
	I1108 00:18:32.959301   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:32.959347   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:32.959307   50505 start.go:926] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
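
The sed pipeline completed above splices a hosts stanza into CoreDNS's Corefile so in-cluster lookups of host.minikube.internal resolve to the host gateway (192.168.61.1 in this run). A hedged way to confirm the injection, again assuming the profile-named context:

    # Print the Corefile and show the injected stanza; the IP comes from the log.
    kubectl --context no-preload-320390 -n kube-system \
      get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'
    # Expected fragment, per the sed script in the log:
    #    hosts {
    #       192.168.61.1 host.minikube.internal
    #       fallthrough
    #    }
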
	I1108 00:18:32.959293   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:32.959711   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:32.959729   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:32.959748   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:32.959761   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:32.959771   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:32.959780   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:32.959795   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:32.959807   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:32.960123   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:32.960137   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:32.960207   50505 main.go:141] libmachine: (no-preload-320390) DBG | Closing plugin on server side
	I1108 00:18:32.960229   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:32.960237   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:33.007609   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:33.007641   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:33.007926   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:33.007945   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:33.106167   50505 pod_ready.go:102] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:33.284838   50505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.626637787s)
	I1108 00:18:33.284900   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:33.284916   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:33.285239   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:33.285259   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:33.285269   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:33.285278   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:33.285579   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:33.285612   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:33.285626   50505 addons.go:467] Verifying addon metrics-server=true in "no-preload-320390"
	I1108 00:18:33.285579   50505 main.go:141] libmachine: (no-preload-320390) DBG | Closing plugin on server side
	I1108 00:18:33.288563   50505 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1108 00:18:33.290062   50505 addons.go:502] enable addons completed in 3.114669599s: enabled=[storage-provisioner default-storageclass metrics-server]
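
metrics-server registers itself as an aggregated API, so one hedged way to verify the addon the log just enabled is to inspect its APIService (v1beta1.metrics.k8s.io is the name the upstream manifests use). Note these StartStop runs point metrics-server at a fake.domain image (visible in the embed-certs log further down), so the pod is expected to stay unready and the second command to fail here:

    kubectl --context no-preload-320390 get apiservice v1beta1.metrics.k8s.io
    kubectl --context no-preload-320390 top nodes   # succeeds only once metrics actually flow
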
	I1108 00:18:30.231324   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:32.715318   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:33.473926   51228 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.751140561s)
	I1108 00:18:33.473999   51228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:33.489630   51228 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:18:33.501413   51228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:18:33.513531   51228 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:18:33.513588   51228 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1108 00:18:33.767243   51228 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 00:18:31.995169   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:32.494991   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:32.995423   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:33.494761   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:33.995099   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:34.494829   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:34.995699   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:35.495034   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:35.995563   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:36.494752   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:35.563227   50505 pod_ready.go:102] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:37.563703   50505 pod_ready.go:102] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:34.715399   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:36.717212   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:39.215769   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:36.995285   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:37.495447   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:37.995529   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:38.494898   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:38.995450   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:39.494831   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:39.994880   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:40.097031   50613 kubeadm.go:1081] duration metric: took 12.746819294s to wait for elevateKubeSystemPrivileges.
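
The run of "kubectl get sa default" lines above is minikube polling for the default ServiceAccount to exist before it grants kube-system:default cluster-admin (the minikube-rbac clusterrolebinding earlier in the log). A minimal sketch of the same wait, reusing the binary and kubeconfig paths from the log:

    # Poll until the default ServiceAccount exists, as elevateKubeSystemPrivileges does.
    until sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
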
	I1108 00:18:40.097074   50613 kubeadm.go:406] StartCluster complete in 5m13.552864243s
	I1108 00:18:40.097102   50613 settings.go:142] acquiring lock: {Name:mk24113e0811d0822c92609e9886aa6fa175d90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:18:40.097182   50613 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:18:40.099232   50613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:18:40.099513   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 00:18:40.099522   50613 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1108 00:18:40.099603   50613 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-253253"
	I1108 00:18:40.099612   50613 addons.go:69] Setting default-storageclass=true in profile "embed-certs-253253"
	I1108 00:18:40.099625   50613 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-253253"
	I1108 00:18:40.099626   50613 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-253253"
	W1108 00:18:40.099635   50613 addons.go:240] addon storage-provisioner should already be in state true
	I1108 00:18:40.099675   50613 host.go:66] Checking if "embed-certs-253253" exists ...
	I1108 00:18:40.099724   50613 config.go:182] Loaded profile config "embed-certs-253253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:18:40.099769   50613 addons.go:69] Setting metrics-server=true in profile "embed-certs-253253"
	I1108 00:18:40.099783   50613 addons.go:231] Setting addon metrics-server=true in "embed-certs-253253"
	W1108 00:18:40.099791   50613 addons.go:240] addon metrics-server should already be in state true
	I1108 00:18:40.099827   50613 host.go:66] Checking if "embed-certs-253253" exists ...
	I1108 00:18:40.100063   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.100064   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.100085   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.100086   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.100199   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.100229   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.117281   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35397
	I1108 00:18:40.117806   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.118339   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.118364   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.118717   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.118761   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38821
	I1108 00:18:40.119093   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.119311   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.119334   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.119497   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.119520   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.119668   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
	I1108 00:18:40.119841   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.119970   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.120403   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.120436   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.120443   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.120456   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.120895   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.121048   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetState
	I1108 00:18:40.123728   50613 addons.go:231] Setting addon default-storageclass=true in "embed-certs-253253"
	W1108 00:18:40.123746   50613 addons.go:240] addon default-storageclass should already be in state true
	I1108 00:18:40.123774   50613 host.go:66] Checking if "embed-certs-253253" exists ...
	I1108 00:18:40.124049   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.124073   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.139787   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39437
	I1108 00:18:40.140217   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.140776   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.140799   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.141358   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.143152   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34997
	I1108 00:18:40.143448   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetState
	I1108 00:18:40.144341   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.145156   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.145175   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.145536   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.145695   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:18:40.146126   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.146151   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.147863   50613 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:18:40.149252   50613 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:18:40.149270   50613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 00:18:40.149288   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:18:40.149701   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41685
	I1108 00:18:40.150096   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.150599   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.150613   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.151053   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.151223   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetState
	I1108 00:18:40.152047   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:18:40.152462   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:18:40.152476   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:18:40.152718   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:18:40.152834   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:18:40.152927   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:18:40.153008   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:18:40.153394   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:18:40.155041   50613 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1108 00:18:40.156603   50613 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 00:18:40.156625   50613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 00:18:40.156642   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:18:40.159550   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:18:40.159952   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:18:40.159973   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:18:40.160151   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:18:40.160294   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:18:40.160403   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:18:40.160505   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:18:40.162863   50613 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-253253" context rescaled to 1 replicas
	I1108 00:18:40.162890   50613 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 00:18:40.164733   50613 out.go:177] * Verifying Kubernetes components...
	I1108 00:18:40.166082   50613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:40.167562   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36079
	I1108 00:18:40.167938   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.168414   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.168433   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.168805   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.169056   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetState
	I1108 00:18:40.170751   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:18:40.171377   50613 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 00:18:40.171389   50613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 00:18:40.171402   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:18:40.174508   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:18:40.174826   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:18:40.174859   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:18:40.175035   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:18:40.175182   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:18:40.175341   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:18:40.175467   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:18:40.387003   50613 node_ready.go:35] waiting up to 6m0s for node "embed-certs-253253" to be "Ready" ...
	I1108 00:18:40.387126   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 00:18:40.398413   50613 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 00:18:40.398489   50613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1108 00:18:40.400162   50613 node_ready.go:49] node "embed-certs-253253" has status "Ready":"True"
	I1108 00:18:40.400189   50613 node_ready.go:38] duration metric: took 13.150355ms waiting for node "embed-certs-253253" to be "Ready" ...
	I1108 00:18:40.400204   50613 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:18:40.416263   50613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 00:18:40.420346   50613 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-thtp4" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:40.441486   50613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:18:40.468701   50613 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 00:18:40.468731   50613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 00:18:40.546438   50613 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:18:40.546475   50613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 00:18:40.620999   50613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:18:41.963134   50613 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.575984932s)
	I1108 00:18:41.963222   50613 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1108 00:18:41.963099   50613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.546802194s)
	I1108 00:18:41.963311   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:41.963342   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:41.963771   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:41.963821   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:41.963843   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:41.963862   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:41.964176   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:41.964202   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:41.964188   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Closing plugin on server side
	I1108 00:18:41.997903   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:41.997987   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:41.998341   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Closing plugin on server side
	I1108 00:18:41.998428   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:41.998487   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:42.447761   50613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.006222409s)
	I1108 00:18:42.447810   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:42.447824   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:42.448092   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:42.448109   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:42.448110   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Closing plugin on server side
	I1108 00:18:42.448127   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:42.448143   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:42.449994   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Closing plugin on server side
	I1108 00:18:42.450013   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:42.450027   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:42.484250   50613 pod_ready.go:102] pod "coredns-5dd5756b68-thtp4" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:42.788997   50613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.167954058s)
	I1108 00:18:42.789042   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:42.789057   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:42.789342   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Closing plugin on server side
	I1108 00:18:42.789395   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:42.789416   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:42.789427   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:42.789437   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:42.789673   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:42.789698   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:42.789709   50613 addons.go:467] Verifying addon metrics-server=true in "embed-certs-253253"
	I1108 00:18:42.792162   50613 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1108 00:18:39.563860   50505 pod_ready.go:102] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:41.565166   50505 pod_ready.go:102] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:44.063902   50505 pod_ready.go:102] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:41.216274   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:43.717636   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:45.631283   51228 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1108 00:18:45.631354   51228 kubeadm.go:322] [preflight] Running pre-flight checks
	I1108 00:18:45.631464   51228 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 00:18:45.631583   51228 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 00:18:45.631736   51228 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1108 00:18:45.631848   51228 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 00:18:45.633488   51228 out.go:204]   - Generating certificates and keys ...
	I1108 00:18:45.633579   51228 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1108 00:18:45.633656   51228 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1108 00:18:45.633756   51228 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1108 00:18:45.633840   51228 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1108 00:18:45.633947   51228 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1108 00:18:45.634041   51228 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1108 00:18:45.634140   51228 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1108 00:18:45.634244   51228 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1108 00:18:45.634357   51228 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1108 00:18:45.634458   51228 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1108 00:18:45.634541   51228 kubeadm.go:322] [certs] Using the existing "sa" key
	I1108 00:18:45.634625   51228 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 00:18:45.634713   51228 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 00:18:45.634781   51228 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 00:18:45.634865   51228 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 00:18:45.634935   51228 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 00:18:45.635044   51228 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 00:18:45.635133   51228 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 00:18:45.636666   51228 out.go:204]   - Booting up control plane ...
	I1108 00:18:45.636755   51228 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 00:18:45.636862   51228 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 00:18:45.636939   51228 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 00:18:45.637065   51228 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 00:18:45.637164   51228 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 00:18:45.637221   51228 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1108 00:18:45.637410   51228 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1108 00:18:45.637479   51228 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.005347 seconds
	I1108 00:18:45.637583   51228 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 00:18:45.637710   51228 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 00:18:45.637782   51228 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 00:18:45.637961   51228 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-039263 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 00:18:45.638007   51228 kubeadm.go:322] [bootstrap-token] Using token: ub1ww5.kh6zrwfrcg8jc9rc
	I1108 00:18:45.639491   51228 out.go:204]   - Configuring RBAC rules ...
	I1108 00:18:45.639627   51228 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 00:18:45.639743   51228 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 00:18:45.639918   51228 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 00:18:45.640060   51228 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 00:18:45.640240   51228 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 00:18:45.640344   51228 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 00:18:45.640487   51228 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 00:18:45.640546   51228 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1108 00:18:45.640625   51228 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1108 00:18:45.640643   51228 kubeadm.go:322] 
	I1108 00:18:45.640726   51228 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1108 00:18:45.640737   51228 kubeadm.go:322] 
	I1108 00:18:45.640850   51228 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1108 00:18:45.640860   51228 kubeadm.go:322] 
	I1108 00:18:45.640891   51228 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1108 00:18:45.640968   51228 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 00:18:45.641042   51228 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 00:18:45.641048   51228 kubeadm.go:322] 
	I1108 00:18:45.641124   51228 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1108 00:18:45.641137   51228 kubeadm.go:322] 
	I1108 00:18:45.641193   51228 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 00:18:45.641204   51228 kubeadm.go:322] 
	I1108 00:18:45.641266   51228 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1108 00:18:45.641372   51228 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 00:18:45.641485   51228 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 00:18:45.641493   51228 kubeadm.go:322] 
	I1108 00:18:45.641589   51228 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 00:18:45.641704   51228 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1108 00:18:45.641714   51228 kubeadm.go:322] 
	I1108 00:18:45.641815   51228 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token ub1ww5.kh6zrwfrcg8jc9rc \
	I1108 00:18:45.641939   51228 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 \
	I1108 00:18:45.641971   51228 kubeadm.go:322] 	--control-plane 
	I1108 00:18:45.641979   51228 kubeadm.go:322] 
	I1108 00:18:45.642084   51228 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1108 00:18:45.642093   51228 kubeadm.go:322] 
	I1108 00:18:45.642216   51228 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token ub1ww5.kh6zrwfrcg8jc9rc \
	I1108 00:18:45.642356   51228 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 
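
The --discovery-token-ca-cert-hash pinned in the join commands above can be recomputed on the control plane with the standard openssl pipeline from the kubeadm documentation; the certificate directory is /var/lib/minikube/certs per the [certs] lines earlier in this init output:

    # Recompute the CA public-key hash that the join command pins.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # -> a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713
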
	I1108 00:18:45.642372   51228 cni.go:84] Creating CNI manager for ""
	I1108 00:18:45.642379   51228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:18:45.644712   51228 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:18:45.646211   51228 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:18:45.672621   51228 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
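
The log records only that a 457-byte bridge conflist was written, not its contents. One hedged way to inspect the file on the node itself, with the profile name taken from the surrounding log:

    minikube -p default-k8s-diff-port-039263 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"
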
	I1108 00:18:45.700061   51228 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 00:18:45.700142   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:45.700153   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=default-k8s-diff-port-039263 minikube.k8s.io/updated_at=2023_11_08T00_18_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:45.805900   51228 ops.go:34] apiserver oom_adj: -16
	I1108 00:18:42.794167   50613 addons.go:502] enable addons completed in 2.694639707s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1108 00:18:44.953906   50613 pod_ready.go:92] pod "coredns-5dd5756b68-thtp4" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.953928   50613 pod_ready.go:81] duration metric: took 4.533558234s waiting for pod "coredns-5dd5756b68-thtp4" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.953936   50613 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.958854   50613 pod_ready.go:92] pod "etcd-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.958880   50613 pod_ready.go:81] duration metric: took 4.937561ms waiting for pod "etcd-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.958892   50613 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.964282   50613 pod_ready.go:92] pod "kube-apiserver-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.964305   50613 pod_ready.go:81] duration metric: took 5.40486ms waiting for pod "kube-apiserver-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.964317   50613 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.969544   50613 pod_ready.go:92] pod "kube-controller-manager-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.969561   50613 pod_ready.go:81] duration metric: took 5.237377ms waiting for pod "kube-controller-manager-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.969568   50613 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-shp9z" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.974340   50613 pod_ready.go:92] pod "kube-proxy-shp9z" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.974357   50613 pod_ready.go:81] duration metric: took 4.78369ms waiting for pod "kube-proxy-shp9z" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.974367   50613 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:45.350442   50613 pod_ready.go:92] pod "kube-scheduler-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:45.350465   50613 pod_ready.go:81] duration metric: took 376.091394ms waiting for pod "kube-scheduler-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:45.350473   50613 pod_ready.go:38] duration metric: took 4.950259719s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:18:45.350487   50613 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:18:45.350529   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:18:45.366477   50613 api_server.go:72] duration metric: took 5.203563902s to wait for apiserver process to appear ...
	I1108 00:18:45.366502   50613 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:18:45.366519   50613 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1108 00:18:45.375074   50613 api_server.go:279] https://192.168.39.159:8443/healthz returned 200:
	ok
	I1108 00:18:45.376646   50613 api_server.go:141] control plane version: v1.28.3
	I1108 00:18:45.376666   50613 api_server.go:131] duration metric: took 10.158963ms to wait for apiserver health ...
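
The healthz probe above can be reproduced by hand; /healthz (like /version) is readable anonymously under the default system:public-info-viewer binding, and the endpoint and port come straight from the log line:

    curl -k https://192.168.39.159:8443/healthz   # -k: the apiserver cert is self-signed
    # -> ok
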
	I1108 00:18:45.376674   50613 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:18:45.554560   50613 system_pods.go:59] 8 kube-system pods found
	I1108 00:18:45.554598   50613 system_pods.go:61] "coredns-5dd5756b68-thtp4" [a3671b72-d562-4be2-9942-e971ee31b2c3] Running
	I1108 00:18:45.554605   50613 system_pods.go:61] "etcd-embed-certs-253253" [271bb11f-9263-43bb-a1ad-950b066f46bc] Running
	I1108 00:18:45.554611   50613 system_pods.go:61] "kube-apiserver-embed-certs-253253" [f247270e-3c67-4b37-a6ee-31934a59dd3c] Running
	I1108 00:18:45.554618   50613 system_pods.go:61] "kube-controller-manager-embed-certs-253253" [431c2e96-fff2-4076-95d4-11aa43e0d417] Running
	I1108 00:18:45.554624   50613 system_pods.go:61] "kube-proxy-shp9z" [cda240f2-977b-4318-9ee4-74f0090af489] Running
	I1108 00:18:45.554635   50613 system_pods.go:61] "kube-scheduler-embed-certs-253253" [a22238ad-7283-4dbf-8ff2-5626761a6e08] Running
	I1108 00:18:45.554655   50613 system_pods.go:61] "metrics-server-57f55c9bc5-f8rk4" [927cc877-7a22-47e3-b666-1adf0cc1b5c6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:18:45.554697   50613 system_pods.go:61] "storage-provisioner" [fa05e7e5-87e7-43ac-af74-1c8a713b51c5] Running
	I1108 00:18:45.554712   50613 system_pods.go:74] duration metric: took 178.032339ms to wait for pod list to return data ...
	I1108 00:18:45.554722   50613 default_sa.go:34] waiting for default service account to be created ...
	I1108 00:18:45.750181   50613 default_sa.go:45] found service account: "default"
	I1108 00:18:45.750210   50613 default_sa.go:55] duration metric: took 195.480878ms for default service account to be created ...
	I1108 00:18:45.750220   50613 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 00:18:45.953261   50613 system_pods.go:86] 8 kube-system pods found
	I1108 00:18:45.953303   50613 system_pods.go:89] "coredns-5dd5756b68-thtp4" [a3671b72-d562-4be2-9942-e971ee31b2c3] Running
	I1108 00:18:45.953312   50613 system_pods.go:89] "etcd-embed-certs-253253" [271bb11f-9263-43bb-a1ad-950b066f46bc] Running
	I1108 00:18:45.953320   50613 system_pods.go:89] "kube-apiserver-embed-certs-253253" [f247270e-3c67-4b37-a6ee-31934a59dd3c] Running
	I1108 00:18:45.953329   50613 system_pods.go:89] "kube-controller-manager-embed-certs-253253" [431c2e96-fff2-4076-95d4-11aa43e0d417] Running
	I1108 00:18:45.953348   50613 system_pods.go:89] "kube-proxy-shp9z" [cda240f2-977b-4318-9ee4-74f0090af489] Running
	I1108 00:18:45.953360   50613 system_pods.go:89] "kube-scheduler-embed-certs-253253" [a22238ad-7283-4dbf-8ff2-5626761a6e08] Running
	I1108 00:18:45.953375   50613 system_pods.go:89] "metrics-server-57f55c9bc5-f8rk4" [927cc877-7a22-47e3-b666-1adf0cc1b5c6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:18:45.953387   50613 system_pods.go:89] "storage-provisioner" [fa05e7e5-87e7-43ac-af74-1c8a713b51c5] Running
	I1108 00:18:45.953402   50613 system_pods.go:126] duration metric: took 203.174777ms to wait for k8s-apps to be running ...
	I1108 00:18:45.953414   50613 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 00:18:45.953471   50613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:45.969669   50613 system_svc.go:56] duration metric: took 16.24852ms WaitForService to wait for kubelet.
	I1108 00:18:45.969698   50613 kubeadm.go:581] duration metric: took 5.806787278s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1108 00:18:45.969720   50613 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:18:46.150807   50613 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:18:46.150839   50613 node_conditions.go:123] node cpu capacity is 2
	I1108 00:18:46.150853   50613 node_conditions.go:105] duration metric: took 181.127043ms to run NodePressure ...
	I1108 00:18:46.150866   50613 start.go:228] waiting for startup goroutines ...
	I1108 00:18:46.150876   50613 start.go:233] waiting for cluster config update ...
	I1108 00:18:46.150886   50613 start.go:242] writing updated cluster config ...
	I1108 00:18:46.151185   50613 ssh_runner.go:195] Run: rm -f paused
	I1108 00:18:46.209047   50613 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1108 00:18:46.211074   50613 out.go:177] * Done! kubectl is now configured to use "embed-certs-253253" cluster and "default" namespace by default
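
Since minikube has just written the embed-certs-253253 context into the kubeconfig it updated (/home/jenkins/minikube-integration/17585-9647/kubeconfig), an ordinary status check works immediately. A sketch, assuming that kubeconfig is the active one:

    kubectl --context embed-certs-253253 get nodes -o wide
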
	I1108 00:18:44.564102   50505 pod_ready.go:97] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.61.176 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-11-08 00:18:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-11-08 00:18:33 +0000 UTC,FinishedAt:2023-11-08 00:18:43 +0000 UTC,ContainerID:cri-o://4ffd62a60718dd1c6133afefc215085069920afc1cca2f055336a977765569cb,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3 ContainerID:cri-o://4ffd62a60718dd1c6133afefc215085069920afc1cca2f055336a977765569cb Started:0xc0035e3d00 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1108 00:18:44.564132   50505 pod_ready.go:81] duration metric: took 13.91522436s waiting for pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace to be "Ready" ...
	E1108 00:18:44.564147   50505 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.61.176 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-11-08 00:18:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-11-08 00:18:33 +0000 UTC,FinishedAt:2023-11-08 00:18:43 +0000 UTC,ContainerID:cri-o://4ffd62a60718dd1c6133afefc215085069920afc1cca2f055336a977765569cb,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3 ContainerID:cri-o://4ffd62a60718dd1c6133afefc215085069920afc1cca2f055336a977765569cb Started:0xc0035e3d00 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1108 00:18:44.564158   50505 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vl7nr" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.573431   50505 pod_ready.go:92] pod "coredns-5dd5756b68-vl7nr" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.573462   50505 pod_ready.go:81] duration metric: took 9.295648ms waiting for pod "coredns-5dd5756b68-vl7nr" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.573473   50505 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.580792   50505 pod_ready.go:92] pod "etcd-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.580828   50505 pod_ready.go:81] duration metric: took 7.346504ms waiting for pod "etcd-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.580840   50505 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.587095   50505 pod_ready.go:92] pod "kube-apiserver-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.587117   50505 pod_ready.go:81] duration metric: took 6.268891ms waiting for pod "kube-apiserver-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.587130   50505 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.594022   50505 pod_ready.go:92] pod "kube-controller-manager-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.594039   50505 pod_ready.go:81] duration metric: took 6.901477ms waiting for pod "kube-controller-manager-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.594052   50505 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m6k8g" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.960144   50505 pod_ready.go:92] pod "kube-proxy-m6k8g" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.960162   50505 pod_ready.go:81] duration metric: took 366.102529ms waiting for pod "kube-proxy-m6k8g" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.960173   50505 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:45.361366   50505 pod_ready.go:92] pod "kube-scheduler-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:45.361388   50505 pod_ready.go:81] duration metric: took 401.208779ms waiting for pod "kube-scheduler-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:45.361396   50505 pod_ready.go:38] duration metric: took 14.818791823s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:18:45.361408   50505 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:18:45.361453   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:18:45.377632   50505 api_server.go:72] duration metric: took 15.078013421s to wait for apiserver process to appear ...
	I1108 00:18:45.377656   50505 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:18:45.377673   50505 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1108 00:18:45.383912   50505 api_server.go:279] https://192.168.61.176:8443/healthz returned 200:
	ok
	I1108 00:18:45.385131   50505 api_server.go:141] control plane version: v1.28.3
	I1108 00:18:45.385153   50505 api_server.go:131] duration metric: took 7.489916ms to wait for apiserver health ...
	I1108 00:18:45.385163   50505 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:18:45.565081   50505 system_pods.go:59] 8 kube-system pods found
	I1108 00:18:45.565112   50505 system_pods.go:61] "coredns-5dd5756b68-vl7nr" [4c6d5125-ebac-4931-9af7-045d1c4ba2b1] Running
	I1108 00:18:45.565120   50505 system_pods.go:61] "etcd-no-preload-320390" [fed32a26-d2ab-4470-b424-cc123c0afdf2] Running
	I1108 00:18:45.565127   50505 system_pods.go:61] "kube-apiserver-no-preload-320390" [4cc8b2c1-0f11-4fa9-ab08-0b6039e98b08] Running
	I1108 00:18:45.565134   50505 system_pods.go:61] "kube-controller-manager-no-preload-320390" [028b3d4e-ab62-44c3-b78e-268012d13db3] Running
	I1108 00:18:45.565141   50505 system_pods.go:61] "kube-proxy-m6k8g" [60b019bf-527c-4265-a67c-31e6cf377039] Running
	I1108 00:18:45.565149   50505 system_pods.go:61] "kube-scheduler-no-preload-320390" [c9c606b6-8188-4918-a5c6-cdc845ca5fb4] Running
	I1108 00:18:45.565157   50505 system_pods.go:61] "metrics-server-57f55c9bc5-n49bz" [26c5310d-c29f-476a-a520-bd693143e248] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:18:45.565171   50505 system_pods.go:61] "storage-provisioner" [bdba396c-182a-4bef-8ccb-2275534d89c8] Running
	I1108 00:18:45.565185   50505 system_pods.go:74] duration metric: took 180.015317ms to wait for pod list to return data ...
	I1108 00:18:45.565196   50505 default_sa.go:34] waiting for default service account to be created ...
	I1108 00:18:45.760190   50505 default_sa.go:45] found service account: "default"
	I1108 00:18:45.760217   50505 default_sa.go:55] duration metric: took 195.014175ms for default service account to be created ...
	I1108 00:18:45.760227   50505 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 00:18:45.966186   50505 system_pods.go:86] 8 kube-system pods found
	I1108 00:18:45.966223   50505 system_pods.go:89] "coredns-5dd5756b68-vl7nr" [4c6d5125-ebac-4931-9af7-045d1c4ba2b1] Running
	I1108 00:18:45.966231   50505 system_pods.go:89] "etcd-no-preload-320390" [fed32a26-d2ab-4470-b424-cc123c0afdf2] Running
	I1108 00:18:45.966239   50505 system_pods.go:89] "kube-apiserver-no-preload-320390" [4cc8b2c1-0f11-4fa9-ab08-0b6039e98b08] Running
	I1108 00:18:45.966245   50505 system_pods.go:89] "kube-controller-manager-no-preload-320390" [028b3d4e-ab62-44c3-b78e-268012d13db3] Running
	I1108 00:18:45.966252   50505 system_pods.go:89] "kube-proxy-m6k8g" [60b019bf-527c-4265-a67c-31e6cf377039] Running
	I1108 00:18:45.966259   50505 system_pods.go:89] "kube-scheduler-no-preload-320390" [c9c606b6-8188-4918-a5c6-cdc845ca5fb4] Running
	I1108 00:18:45.966268   50505 system_pods.go:89] "metrics-server-57f55c9bc5-n49bz" [26c5310d-c29f-476a-a520-bd693143e248] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:18:45.966279   50505 system_pods.go:89] "storage-provisioner" [bdba396c-182a-4bef-8ccb-2275534d89c8] Running
	I1108 00:18:45.966294   50505 system_pods.go:126] duration metric: took 206.05956ms to wait for k8s-apps to be running ...
	I1108 00:18:45.966305   50505 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 00:18:45.966355   50505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:45.984753   50505 system_svc.go:56] duration metric: took 18.427005ms WaitForService to wait for kubelet.
	I1108 00:18:45.984781   50505 kubeadm.go:581] duration metric: took 15.685164805s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1108 00:18:45.984803   50505 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:18:46.159568   50505 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:18:46.159602   50505 node_conditions.go:123] node cpu capacity is 2
	I1108 00:18:46.159615   50505 node_conditions.go:105] duration metric: took 174.805156ms to run NodePressure ...
	I1108 00:18:46.159627   50505 start.go:228] waiting for startup goroutines ...
	I1108 00:18:46.159636   50505 start.go:233] waiting for cluster config update ...
	I1108 00:18:46.159649   50505 start.go:242] writing updated cluster config ...
	I1108 00:18:46.159934   50505 ssh_runner.go:195] Run: rm -f paused
	I1108 00:18:46.220234   50505 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1108 00:18:46.222217   50505 out.go:177] * Done! kubectl is now configured to use "no-preload-320390" cluster and "default" namespace by default
	I1108 00:18:46.222047   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:48.714709   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:46.109921   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:46.223968   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:46.849987   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:47.349982   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:47.850871   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:48.350081   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:48.850494   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:49.350809   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:49.850515   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:50.350227   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:50.850044   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:50.714976   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:53.214612   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:51.350594   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:51.850705   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:52.349971   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:52.850530   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:53.350696   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:53.850039   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:54.350523   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:54.849805   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:55.350560   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:55.849890   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:56.350679   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:56.849863   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:57.350004   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:57.850463   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:58.349999   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:58.850810   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:58.958213   51228 kubeadm.go:1081] duration metric: took 13.258132625s to wait for elevateKubeSystemPrivileges.
	I1108 00:18:58.958253   51228 kubeadm.go:406] StartCluster complete in 5m8.559036824s
	I1108 00:18:58.958281   51228 settings.go:142] acquiring lock: {Name:mk24113e0811d0822c92609e9886aa6fa175d90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:18:58.958371   51228 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:18:58.960083   51228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:18:58.960306   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 00:18:58.960417   51228 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1108 00:18:58.960497   51228 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-039263"
	I1108 00:18:58.960505   51228 config.go:182] Loaded profile config "default-k8s-diff-port-039263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:18:58.960517   51228 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-039263"
	I1108 00:18:58.960544   51228 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-039263"
	I1108 00:18:58.960521   51228 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-039263"
	I1108 00:18:58.960538   51228 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-039263"
	I1108 00:18:58.960588   51228 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-039263"
	W1108 00:18:58.960607   51228 addons.go:240] addon metrics-server should already be in state true
	I1108 00:18:58.960654   51228 host.go:66] Checking if "default-k8s-diff-port-039263" exists ...
	W1108 00:18:58.960566   51228 addons.go:240] addon storage-provisioner should already be in state true
	I1108 00:18:58.960732   51228 host.go:66] Checking if "default-k8s-diff-port-039263" exists ...
	I1108 00:18:58.961043   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:58.961079   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:58.961112   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:58.961115   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:58.961155   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:58.961164   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:58.980365   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41725
	I1108 00:18:58.980386   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46535
	I1108 00:18:58.980512   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45857
	I1108 00:18:58.980860   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:58.980912   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:58.980863   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:58.981328   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:58.981350   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:58.981457   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:58.981466   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:58.981477   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:58.981483   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:58.981861   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:58.981861   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:58.981863   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:58.982023   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetState
	I1108 00:18:58.982419   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:58.982429   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:58.982447   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:58.982464   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:58.985852   51228 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-039263"
	W1108 00:18:58.985875   51228 addons.go:240] addon default-storageclass should already be in state true
	I1108 00:18:58.985902   51228 host.go:66] Checking if "default-k8s-diff-port-039263" exists ...
	I1108 00:18:58.986359   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:58.986390   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:58.996161   51228 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-039263" context rescaled to 1 replicas
	I1108 00:18:58.996200   51228 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.116 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 00:18:58.998257   51228 out.go:177] * Verifying Kubernetes components...
	I1108 00:18:58.999857   51228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:58.999917   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35521
	I1108 00:18:58.998777   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I1108 00:18:59.000380   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:59.001040   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:59.001093   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:59.001205   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:59.001478   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:59.001674   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:59.001690   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:59.001762   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetState
	I1108 00:18:59.002038   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:59.002209   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetState
	I1108 00:18:59.003822   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:18:59.006057   51228 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1108 00:18:59.004254   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:18:59.006174   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46331
	I1108 00:18:59.007678   51228 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 00:18:59.007688   51228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 00:18:59.007706   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:18:59.009545   51228 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:18:55.714548   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:57.715173   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:59.007989   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:59.010470   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:18:59.010632   51228 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:18:59.010640   51228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 00:18:59.010653   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:18:59.011015   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:18:59.011039   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:18:59.011227   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:59.011250   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:59.011650   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:59.011657   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:18:59.012158   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:59.012188   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:59.012671   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:18:59.012805   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:18:59.012925   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:18:59.013938   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:18:59.014329   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:18:59.014348   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:18:59.014493   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:18:59.014645   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:18:59.014770   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:18:59.014879   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:18:59.030160   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44631
	I1108 00:18:59.030558   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:59.031087   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:59.031101   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:59.031353   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:59.031558   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetState
	I1108 00:18:59.033203   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:18:59.033540   51228 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 00:18:59.033556   51228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 00:18:59.033573   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:18:59.036749   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:18:59.037158   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:18:59.037177   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:18:59.037364   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:18:59.037551   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:18:59.037684   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:18:59.037791   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:18:59.349254   51228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 00:18:59.451588   51228 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-039263" to be "Ready" ...
	I1108 00:18:59.451664   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 00:18:59.464584   51228 node_ready.go:49] node "default-k8s-diff-port-039263" has status "Ready":"True"
	I1108 00:18:59.464616   51228 node_ready.go:38] duration metric: took 12.97792ms waiting for node "default-k8s-diff-port-039263" to be "Ready" ...
	I1108 00:18:59.464629   51228 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:18:59.475428   51228 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7ktrv" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:59.481740   51228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:18:59.483627   51228 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 00:18:59.483644   51228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1108 00:18:59.599214   51228 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 00:18:59.599244   51228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 00:18:59.661512   51228 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:18:59.661537   51228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 00:18:59.726775   51228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:19:01.455332   51228 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.003642063s)
	I1108 00:19:01.455368   51228 start.go:926] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1108 00:19:01.455575   51228 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.106281369s)
	I1108 00:19:01.455635   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:01.455659   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:01.455957   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:01.456004   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:01.456026   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:01.456048   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:01.456296   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:01.456332   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:01.456339   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Closing plugin on server side
	I1108 00:19:01.485941   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:01.485970   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:01.486229   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:01.486287   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Closing plugin on server side
	I1108 00:19:01.486294   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:01.599500   51228 pod_ready.go:102] pod "coredns-5dd5756b68-7ktrv" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:01.893463   51228 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.411687372s)
	I1108 00:19:01.893518   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:01.893530   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:01.893844   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:01.893887   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Closing plugin on server side
	I1108 00:19:01.893904   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:01.893918   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:01.893928   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:01.894199   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:01.894215   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:02.421714   51228 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.694889947s)
	I1108 00:19:02.421768   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:02.421785   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:02.422098   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:02.422123   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:02.422141   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:02.422160   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:02.422138   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Closing plugin on server side
	I1108 00:19:02.422425   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Closing plugin on server side
	I1108 00:19:02.422467   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:02.422480   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:02.422492   51228 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-039263"
	I1108 00:19:02.424446   51228 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1108 00:18:59.715708   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:02.214990   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:02.426041   51228 addons.go:502] enable addons completed in 3.465624772s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1108 00:19:02.549025   51228 pod_ready.go:97] pod "coredns-5dd5756b68-7ktrv" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.72.116 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-11-08 00:18:58 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-11-08 00:19:01 +0000 UTC,FinishedAt:2023-11-08 00:19:01 +0000 UTC,ContainerID:cri-o://31fbf2f57498e1f90b02c6fd31ebc03a12f99cb350d5e2c4e6eb7ae3b30853b9,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://31fbf2f57498e1f90b02c6fd31ebc03a12f99cb350d5e2c4e6eb7ae3b30853b9 Started:0xc0030b331c AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1108 00:19:02.549056   51228 pod_ready.go:81] duration metric: took 3.073604936s waiting for pod "coredns-5dd5756b68-7ktrv" in "kube-system" namespace to be "Ready" ...
	E1108 00:19:02.549069   51228 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-7ktrv" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.72.116 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-11-08 00:18:58 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-11-08 00:19:01 +0000 UTC,FinishedAt:2023-11-08 00:19:01 +0000 UTC,ContainerID:cri-o://31fbf2f57498e1f90b02c6fd31ebc03a12f99cb350d5e2c4e6eb7ae3b30853b9,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://31fbf2f57498e1f90b02c6fd31ebc03a12f99cb350d5e2c4e6eb7ae3b30853b9 Started:0xc0030b331c AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1108 00:19:02.549076   51228 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-tt9sm" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.096421   51228 pod_ready.go:92] pod "coredns-5dd5756b68-tt9sm" in "kube-system" namespace has status "Ready":"True"
	I1108 00:19:03.096449   51228 pod_ready.go:81] duration metric: took 547.365037ms waiting for pod "coredns-5dd5756b68-tt9sm" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.096461   51228 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.104473   51228 pod_ready.go:92] pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"True"
	I1108 00:19:03.104497   51228 pod_ready.go:81] duration metric: took 8.028055ms waiting for pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.104509   51228 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.108940   51228 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"True"
	I1108 00:19:03.108965   51228 pod_ready.go:81] duration metric: took 4.447315ms waiting for pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.108976   51228 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.458803   51228 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"True"
	I1108 00:19:03.458831   51228 pod_ready.go:81] duration metric: took 349.845574ms waiting for pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.458844   51228 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rhdhg" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:04.256435   51228 pod_ready.go:92] pod "kube-proxy-rhdhg" in "kube-system" namespace has status "Ready":"True"
	I1108 00:19:04.256457   51228 pod_ready.go:81] duration metric: took 797.605956ms waiting for pod "kube-proxy-rhdhg" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:04.256466   51228 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:04.655727   51228 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"True"
	I1108 00:19:04.655750   51228 pod_ready.go:81] duration metric: took 399.277263ms waiting for pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:04.655758   51228 pod_ready.go:38] duration metric: took 5.191103655s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:19:04.655772   51228 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:19:04.655823   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:19:04.671030   51228 api_server.go:72] duration metric: took 5.674798555s to wait for apiserver process to appear ...
	I1108 00:19:04.671059   51228 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:19:04.671076   51228 api_server.go:253] Checking apiserver healthz at https://192.168.72.116:8444/healthz ...
	I1108 00:19:04.677315   51228 api_server.go:279] https://192.168.72.116:8444/healthz returned 200:
	ok
	I1108 00:19:04.678430   51228 api_server.go:141] control plane version: v1.28.3
	I1108 00:19:04.678451   51228 api_server.go:131] duration metric: took 7.384898ms to wait for apiserver health ...
	I1108 00:19:04.678457   51228 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:19:04.866585   51228 system_pods.go:59] 8 kube-system pods found
	I1108 00:19:04.866617   51228 system_pods.go:61] "coredns-5dd5756b68-tt9sm" [964a0552-9be0-4dbb-9a2f-0be3c93b8f83] Running
	I1108 00:19:04.866622   51228 system_pods.go:61] "etcd-default-k8s-diff-port-039263" [36863807-9899-4a8e-9a18-e3d938be8e8a] Running
	I1108 00:19:04.866626   51228 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-039263" [88677a44-54e3-41d7-8395-7616396a52d4] Running
	I1108 00:19:04.866631   51228 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-039263" [61a04987-85c4-462c-a4a7-1438c079b72b] Running
	I1108 00:19:04.866635   51228 system_pods.go:61] "kube-proxy-rhdhg" [405b26b9-e6b3-440d-8f28-60db650079a8] Running
	I1108 00:19:04.866639   51228 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-039263" [2a36824a-77da-4a54-94f4-484452f1b714] Running
	I1108 00:19:04.866666   51228 system_pods.go:61] "metrics-server-57f55c9bc5-j6t7g" [5c0e827c-8281-4b51-b0c7-d43d0aa22e29] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:19:04.866676   51228 system_pods.go:61] "storage-provisioner" [4cace2ff-d7cd-4d31-9f11-d410bc675cbf] Running
	I1108 00:19:04.866684   51228 system_pods.go:74] duration metric: took 188.222131ms to wait for pod list to return data ...
	I1108 00:19:04.866691   51228 default_sa.go:34] waiting for default service account to be created ...
	I1108 00:19:05.056224   51228 default_sa.go:45] found service account: "default"
	I1108 00:19:05.056251   51228 default_sa.go:55] duration metric: took 189.551289ms for default service account to be created ...
	I1108 00:19:05.056263   51228 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 00:19:05.259774   51228 system_pods.go:86] 8 kube-system pods found
	I1108 00:19:05.259800   51228 system_pods.go:89] "coredns-5dd5756b68-tt9sm" [964a0552-9be0-4dbb-9a2f-0be3c93b8f83] Running
	I1108 00:19:05.259805   51228 system_pods.go:89] "etcd-default-k8s-diff-port-039263" [36863807-9899-4a8e-9a18-e3d938be8e8a] Running
	I1108 00:19:05.259810   51228 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-039263" [88677a44-54e3-41d7-8395-7616396a52d4] Running
	I1108 00:19:05.259814   51228 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-039263" [61a04987-85c4-462c-a4a7-1438c079b72b] Running
	I1108 00:19:05.259818   51228 system_pods.go:89] "kube-proxy-rhdhg" [405b26b9-e6b3-440d-8f28-60db650079a8] Running
	I1108 00:19:05.259822   51228 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-039263" [2a36824a-77da-4a54-94f4-484452f1b714] Running
	I1108 00:19:05.259828   51228 system_pods.go:89] "metrics-server-57f55c9bc5-j6t7g" [5c0e827c-8281-4b51-b0c7-d43d0aa22e29] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:19:05.259832   51228 system_pods.go:89] "storage-provisioner" [4cace2ff-d7cd-4d31-9f11-d410bc675cbf] Running
	I1108 00:19:05.259840   51228 system_pods.go:126] duration metric: took 203.572791ms to wait for k8s-apps to be running ...
	I1108 00:19:05.259846   51228 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 00:19:05.259889   51228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:19:05.274254   51228 system_svc.go:56] duration metric: took 14.400341ms WaitForService to wait for kubelet.
	I1108 00:19:05.274277   51228 kubeadm.go:581] duration metric: took 6.278053459s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1108 00:19:05.274304   51228 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:19:05.457057   51228 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:19:05.457086   51228 node_conditions.go:123] node cpu capacity is 2
	I1108 00:19:05.457097   51228 node_conditions.go:105] duration metric: took 182.787127ms to run NodePressure ...
	I1108 00:19:05.457107   51228 start.go:228] waiting for startup goroutines ...
	I1108 00:19:05.457113   51228 start.go:233] waiting for cluster config update ...
	I1108 00:19:05.457122   51228 start.go:242] writing updated cluster config ...
	I1108 00:19:05.457358   51228 ssh_runner.go:195] Run: rm -f paused
	I1108 00:19:05.507414   51228 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1108 00:19:05.509695   51228 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-039263" cluster and "default" namespace by default
	I1108 00:19:04.715259   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:07.214815   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:09.214886   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:11.715679   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:14.215690   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:16.716315   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:19.215323   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:21.715872   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:24.215543   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:26.409609   50022 pod_ready.go:81] duration metric: took 4m0.000552573s waiting for pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace to be "Ready" ...
	E1108 00:19:26.409644   50022 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1108 00:19:26.409659   50022 pod_ready.go:38] duration metric: took 4m1.201158343s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:19:26.409684   50022 kubeadm.go:640] restartCluster took 5m11.212754497s
	W1108 00:19:26.409757   50022 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1108 00:19:26.409790   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1108 00:19:31.401367   50022 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.991549602s)
	I1108 00:19:31.401473   50022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:19:31.415823   50022 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:19:31.425384   50022 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:19:31.435585   50022 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:19:31.435635   50022 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1108 00:19:31.492015   50022 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1108 00:19:31.492120   50022 kubeadm.go:322] [preflight] Running pre-flight checks
	I1108 00:19:31.649293   50022 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 00:19:31.649437   50022 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 00:19:31.649605   50022 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1108 00:19:31.886799   50022 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 00:19:31.886955   50022 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 00:19:31.896062   50022 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1108 00:19:32.038269   50022 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 00:19:32.040677   50022 out.go:204]   - Generating certificates and keys ...
	I1108 00:19:32.040833   50022 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1108 00:19:32.040945   50022 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1108 00:19:32.041037   50022 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1108 00:19:32.041085   50022 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1108 00:19:32.041142   50022 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1108 00:19:32.041231   50022 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1108 00:19:32.041346   50022 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1108 00:19:32.041441   50022 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1108 00:19:32.041594   50022 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1108 00:19:32.042173   50022 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1108 00:19:32.042236   50022 kubeadm.go:322] [certs] Using the existing "sa" key
	I1108 00:19:32.042302   50022 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 00:19:32.325005   50022 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 00:19:32.544755   50022 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 00:19:32.726539   50022 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 00:19:32.905403   50022 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 00:19:32.906525   50022 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 00:19:32.908371   50022 out.go:204]   - Booting up control plane ...
	I1108 00:19:32.908514   50022 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 00:19:32.919163   50022 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 00:19:32.919256   50022 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 00:19:32.919387   50022 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 00:19:32.928261   50022 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1108 00:19:42.937037   50022 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.006146 seconds
	I1108 00:19:42.937215   50022 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 00:19:42.955795   50022 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 00:19:43.479726   50022 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 00:19:43.479868   50022 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-590541 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1108 00:19:43.989897   50022 kubeadm.go:322] [bootstrap-token] Using token: rpiq38.6eoemv6ygv6ghnel
	I1108 00:19:43.991262   50022 out.go:204]   - Configuring RBAC rules ...
	I1108 00:19:43.991391   50022 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 00:19:44.001502   50022 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 00:19:44.006931   50022 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 00:19:44.012505   50022 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 00:19:44.021422   50022 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 00:19:44.111517   50022 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1108 00:19:44.412934   50022 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1108 00:19:44.412985   50022 kubeadm.go:322] 
	I1108 00:19:44.413073   50022 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1108 00:19:44.413088   50022 kubeadm.go:322] 
	I1108 00:19:44.413186   50022 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1108 00:19:44.413196   50022 kubeadm.go:322] 
	I1108 00:19:44.413230   50022 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1108 00:19:44.413317   50022 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 00:19:44.413388   50022 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 00:19:44.413398   50022 kubeadm.go:322] 
	I1108 00:19:44.413489   50022 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1108 00:19:44.413608   50022 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 00:19:44.413704   50022 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 00:19:44.413720   50022 kubeadm.go:322] 
	I1108 00:19:44.413851   50022 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1108 00:19:44.413974   50022 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1108 00:19:44.413988   50022 kubeadm.go:322] 
	I1108 00:19:44.414090   50022 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token rpiq38.6eoemv6ygv6ghnel \
	I1108 00:19:44.414288   50022 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 \
	I1108 00:19:44.414337   50022 kubeadm.go:322]     --control-plane 	  
	I1108 00:19:44.414347   50022 kubeadm.go:322] 
	I1108 00:19:44.414458   50022 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1108 00:19:44.414474   50022 kubeadm.go:322] 
	I1108 00:19:44.414593   50022 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token rpiq38.6eoemv6ygv6ghnel \
	I1108 00:19:44.414754   50022 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 
	I1108 00:19:44.416038   50022 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
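The init sequence above is driven by one kubeadm invocation run over SSH (the Start: line at 00:19:31 shows the full command). As a rough sketch of how such a command can be composed, assuming a local kubeadm binary in place of minikube's ssh_runner, with the preflight-error list copied verbatim from that log line:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Preflight checks that the re-init is told to ignore, taken
        // from the kubeadm init command logged at 00:19:31.435635.
        ignored := []string{
            "DirAvailable--etc-kubernetes-manifests",
            "DirAvailable--var-lib-minikube",
            "DirAvailable--var-lib-minikube-etcd",
            "FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
            "FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
            "FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
            "FileAvailable--etc-kubernetes-manifests-etcd.yaml",
            "Port-10250", "Swap", "NumCPU",
        }
        cmd := exec.Command("kubeadm", "init",
            "--config", "/var/tmp/minikube/kubeadm.yaml",
            "--ignore-preflight-errors="+strings.Join(ignored, ","))
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s\n", out)
        if err != nil {
            fmt.Println("kubeadm init failed:", err)
        }
    }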
	I1108 00:19:44.416063   50022 cni.go:84] Creating CNI manager for ""
	I1108 00:19:44.416073   50022 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:19:44.417877   50022 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:19:44.419195   50022 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:19:44.448380   50022 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
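The 457-byte conflist itself is not reproduced in the log. Purely as an illustration of what a bridge CNI config of this kind contains (the exact JSON minikube writes is an assumption here), a Go sketch that writes a representative file to the same path:

    package main

    import (
        "log"
        "os"
    )

    // Representative bridge CNI config; the real contents of
    // /etc/cni/net.d/1-k8s.conflist are not shown in the log, so this
    // JSON is illustrative only.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist",
            []byte(bridgeConflist), 0o644); err != nil {
            log.Fatal(err)
        }
    }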
	I1108 00:19:44.474228   50022 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 00:19:44.474339   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:44.474380   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=old-k8s-version-590541 minikube.k8s.io/updated_at=2023_11_08T00_19_44_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:44.739449   50022 ops.go:34] apiserver oom_adj: -16
	I1108 00:19:44.739605   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:44.848712   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:45.444347   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:45.944721   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:46.444140   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:46.944185   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:47.444342   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:47.944227   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:48.443941   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:48.944002   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:49.444440   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:49.943801   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:50.444481   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:50.944720   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:51.443857   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:51.943755   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:52.444663   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:52.944052   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:53.443917   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:53.943763   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:54.443886   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:54.944615   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:55.444156   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:55.944693   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:56.443823   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:56.944727   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:57.444188   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:57.943966   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:58.444659   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:58.944651   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:59.061808   50022 kubeadm.go:1081] duration metric: took 14.587519972s to wait for elevateKubeSystemPrivileges.
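The burst of `kubectl get sa default` runs between 00:19:44 and 00:19:58 is a fixed-interval poll: retry roughly every 500ms until the default service account exists, then report the total duration. A minimal Go sketch of the pattern, assuming a local kubectl rather than minikube's SSH runner:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until it succeeds
    // or the deadline passes, mirroring the ~500ms cadence in the log.
    func waitForDefaultSA(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
        if err := waitForDefaultSA(2 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }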
	I1108 00:19:59.061855   50022 kubeadm.go:406] StartCluster complete in 5m43.925088245s
	I1108 00:19:59.061878   50022 settings.go:142] acquiring lock: {Name:mk24113e0811d0822c92609e9886aa6fa175d90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:19:59.061962   50022 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:19:59.063740   50022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:19:59.064004   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 00:19:59.064107   50022 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1108 00:19:59.064182   50022 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-590541"
	I1108 00:19:59.064198   50022 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-590541"
	I1108 00:19:59.064213   50022 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-590541"
	W1108 00:19:59.064222   50022 addons.go:240] addon storage-provisioner should already be in state true
	I1108 00:19:59.064224   50022 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-590541"
	I1108 00:19:59.064233   50022 config.go:182] Loaded profile config "old-k8s-version-590541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1108 00:19:59.064236   50022 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-590541"
	I1108 00:19:59.064260   50022 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-590541"
	I1108 00:19:59.064265   50022 host.go:66] Checking if "old-k8s-version-590541" exists ...
	W1108 00:19:59.064274   50022 addons.go:240] addon metrics-server should already be in state true
	I1108 00:19:59.064406   50022 host.go:66] Checking if "old-k8s-version-590541" exists ...
	I1108 00:19:59.064720   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.064757   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.064761   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.064797   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.065271   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.065309   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.082041   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37295
	I1108 00:19:59.082534   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.083051   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.083075   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.083432   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.083970   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.084022   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.084099   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40485
	I1108 00:19:59.084222   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34213
	I1108 00:19:59.084440   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.084605   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.084870   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.084887   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.085151   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.085174   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.085248   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.085427   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetState
	I1108 00:19:59.085480   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.086399   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.086442   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.090677   50022 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-590541"
	W1108 00:19:59.090700   50022 addons.go:240] addon default-storageclass should already be in state true
	I1108 00:19:59.090728   50022 host.go:66] Checking if "old-k8s-version-590541" exists ...
	I1108 00:19:59.091092   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.091130   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.101788   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40869
	I1108 00:19:59.102208   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.102631   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.102648   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.103029   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.103219   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetState
	I1108 00:19:59.104809   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44911
	I1108 00:19:59.104937   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:19:59.106844   50022 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1108 00:19:59.105475   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.108350   50022 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 00:19:59.108374   50022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 00:19:59.108403   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:19:59.108551   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45009
	I1108 00:19:59.108910   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.108930   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.109878   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.109881   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.110039   50022 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-590541" context rescaled to 1 replicas
	I1108 00:19:59.110075   50022 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.49 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 00:19:59.111637   50022 out.go:177] * Verifying Kubernetes components...
	I1108 00:19:59.110208   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetState
	I1108 00:19:59.110398   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.113108   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.113220   50022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:19:59.113743   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.113792   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:19:59.114471   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.114510   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.115179   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:19:59.117011   50022 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:19:59.115897   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:19:59.116172   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:19:59.118325   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:19:59.118358   50022 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:19:59.118370   50022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 00:19:59.118383   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:19:59.118504   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:19:59.118696   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:19:59.118854   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:19:59.120889   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:19:59.121255   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:19:59.121280   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:19:59.121465   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:19:59.121647   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:19:59.121783   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:19:59.121868   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:19:59.135569   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40853
	I1108 00:19:59.135977   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.136428   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.136441   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.136799   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.137027   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetState
	I1108 00:19:59.138503   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:19:59.138735   50022 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 00:19:59.138745   50022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 00:19:59.138758   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:19:59.141494   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:19:59.141870   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:19:59.141895   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:19:59.142046   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:19:59.142248   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:19:59.142370   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:19:59.142592   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:19:59.281321   50022 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-590541" to be "Ready" ...
	I1108 00:19:59.281572   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 00:19:59.284783   50022 node_ready.go:49] node "old-k8s-version-590541" has status "Ready":"True"
	I1108 00:19:59.284804   50022 node_ready.go:38] duration metric: took 3.444344ms waiting for node "old-k8s-version-590541" to be "Ready" ...
	I1108 00:19:59.284830   50022 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:19:59.290322   50022 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:59.290908   50022 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 00:19:59.290925   50022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1108 00:19:59.311485   50022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:19:59.346809   50022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 00:19:59.350361   50022 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 00:19:59.350385   50022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 00:19:59.403305   50022 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:19:59.403328   50022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 00:19:59.479823   50022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:20:00.224554   50022 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
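The sed pipeline at 00:19:59.281572 rewrites the CoreDNS ConfigMap in place: it inserts a `log` directive before the `errors` line, and just before the `forward . /etc/resolv.conf` line it inserts the hosts stanza that makes `host.minikube.internal` resolve to the gateway. Reconstructed from the sed expressions, the injected Corefile fragment is:

        hosts {
           192.168.50.1 host.minikube.internal
           fallthrough
        }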
	I1108 00:20:00.659427   50022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.347903115s)
	I1108 00:20:00.659441   50022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.312604515s)
	I1108 00:20:00.659501   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.659533   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.659536   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.659549   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.659834   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.659857   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.659867   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.659876   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.659933   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Closing plugin on server side
	I1108 00:20:00.659981   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.660022   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.660051   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.660062   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.660131   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Closing plugin on server side
	I1108 00:20:00.660242   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.660254   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.660300   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.660321   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.851614   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.851637   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.851930   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Closing plugin on server side
	I1108 00:20:00.851996   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.852027   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.992341   50022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.5124613s)
	I1108 00:20:00.992412   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.992429   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.992774   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Closing plugin on server side
	I1108 00:20:00.992811   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.992830   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.992841   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.992854   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.993100   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.993122   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.993162   50022 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-590541"
	I1108 00:20:00.995051   50022 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1108 00:20:00.996839   50022 addons.go:502] enable addons completed in 1.932740124s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1108 00:20:01.324759   50022 pod_ready.go:102] pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace has status "Ready":"False"
	I1108 00:20:03.823744   50022 pod_ready.go:102] pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace has status "Ready":"False"
	I1108 00:20:06.322994   50022 pod_ready.go:102] pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace has status "Ready":"False"
	I1108 00:20:08.822755   50022 pod_ready.go:102] pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace has status "Ready":"False"
	I1108 00:20:10.823247   50022 pod_ready.go:102] pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace has status "Ready":"False"
	I1108 00:20:12.819017   50022 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-979rq" not found
	I1108 00:20:12.819052   50022 pod_ready.go:81] duration metric: took 13.528699598s waiting for pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace to be "Ready" ...
	E1108 00:20:12.819067   50022 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-979rq" not found
	I1108 00:20:12.819075   50022 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-tbfp7" in "kube-system" namespace to be "Ready" ...
	I1108 00:20:12.825970   50022 pod_ready.go:92] pod "coredns-5644d7b6d9-tbfp7" in "kube-system" namespace has status "Ready":"True"
	I1108 00:20:12.825988   50022 pod_ready.go:81] duration metric: took 6.906077ms waiting for pod "coredns-5644d7b6d9-tbfp7" in "kube-system" namespace to be "Ready" ...
	I1108 00:20:12.825996   50022 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p27g4" in "kube-system" namespace to be "Ready" ...
	I1108 00:20:12.830826   50022 pod_ready.go:92] pod "kube-proxy-p27g4" in "kube-system" namespace has status "Ready":"True"
	I1108 00:20:12.830843   50022 pod_ready.go:81] duration metric: took 4.841517ms waiting for pod "kube-proxy-p27g4" in "kube-system" namespace to be "Ready" ...
	I1108 00:20:12.830852   50022 pod_ready.go:38] duration metric: took 13.54601076s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:20:12.830866   50022 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:20:12.830909   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:20:12.849600   50022 api_server.go:72] duration metric: took 13.739491815s to wait for apiserver process to appear ...
	I1108 00:20:12.849634   50022 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:20:12.849653   50022 api_server.go:253] Checking apiserver healthz at https://192.168.50.49:8443/healthz ...
	I1108 00:20:12.856740   50022 api_server.go:279] https://192.168.50.49:8443/healthz returned 200:
	ok
	I1108 00:20:12.857940   50022 api_server.go:141] control plane version: v1.16.0
	I1108 00:20:12.857960   50022 api_server.go:131] duration metric: took 8.319568ms to wait for apiserver health ...
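The healthz wait above is a plain HTTPS GET against the apiserver endpoint until it returns 200. A self-contained Go sketch of that probe; certificate verification is skipped here only to keep the example standalone (whether minikube pins its own CA or skips verification at this stage is not shown in the log):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The apiserver serves minikube's own CA-signed cert;
                // verification is skipped purely for this sketch.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.50.49:8443/healthz")
        if err != nil {
            fmt.Println("healthz:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }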
	I1108 00:20:12.857967   50022 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:20:12.862192   50022 system_pods.go:59] 4 kube-system pods found
	I1108 00:20:12.862217   50022 system_pods.go:61] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:12.862222   50022 system_pods.go:61] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:12.862230   50022 system_pods.go:61] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:12.862239   50022 system_pods.go:61] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:12.862248   50022 system_pods.go:74] duration metric: took 4.275078ms to wait for pod list to return data ...
	I1108 00:20:12.862257   50022 default_sa.go:34] waiting for default service account to be created ...
	I1108 00:20:12.867018   50022 default_sa.go:45] found service account: "default"
	I1108 00:20:12.867043   50022 default_sa.go:55] duration metric: took 4.778337ms for default service account to be created ...
	I1108 00:20:12.867052   50022 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 00:20:12.871638   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:12.871664   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:12.871671   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:12.871682   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:12.871688   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:12.871706   50022 retry.go:31] will retry after 307.408821ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:13.184897   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:13.184927   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:13.184944   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:13.184954   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:13.184963   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:13.184984   50022 retry.go:31] will retry after 301.786347ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:13.492026   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:13.492053   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:13.492058   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:13.492065   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:13.492070   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:13.492085   50022 retry.go:31] will retry after 396.219719ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:13.893320   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:13.893348   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:13.893356   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:13.893366   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:13.893372   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:13.893390   50022 retry.go:31] will retry after 592.540002ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:14.490613   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:14.490638   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:14.490644   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:14.490651   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:14.490655   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:14.490670   50022 retry.go:31] will retry after 512.19038ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:15.008506   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:15.008533   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:15.008539   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:15.008545   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:15.008586   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:15.008606   50022 retry.go:31] will retry after 704.779032ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:15.719115   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:15.719140   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:15.719145   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:15.719152   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:15.719156   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:15.719174   50022 retry.go:31] will retry after 892.457504ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:16.616738   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:16.616764   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:16.616770   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:16.616776   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:16.616781   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:16.616795   50022 retry.go:31] will retry after 1.107800827s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:17.729962   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:17.729989   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:17.729997   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:17.730007   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:17.730014   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:17.730032   50022 retry.go:31] will retry after 1.24176205s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:18.976866   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:18.976891   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:18.976897   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:18.976905   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:18.976910   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:18.976925   50022 retry.go:31] will retry after 1.449825188s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:20.432723   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:20.432753   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:20.432760   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:20.432770   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:20.432776   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:20.432796   50022 retry.go:31] will retry after 1.764186569s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:22.202432   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:22.202465   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:22.202473   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:22.202484   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:22.202491   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:22.202522   50022 retry.go:31] will retry after 3.392893976s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:25.600685   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:25.600712   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:25.600717   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:25.600723   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:25.600728   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:25.600743   50022 retry.go:31] will retry after 3.537590817s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:29.143439   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:29.143464   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:29.143468   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:29.143475   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:29.143482   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:29.143502   50022 retry.go:31] will retry after 3.82527374s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:32.973763   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:32.973796   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:32.973804   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:32.973814   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:32.973821   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:32.973840   50022 retry.go:31] will retry after 6.225201923s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:39.204648   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:39.204682   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:39.204690   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:39.204702   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:39.204710   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:39.204729   50022 retry.go:31] will retry after 7.177772259s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:46.388992   50022 system_pods.go:86] 5 kube-system pods found
	I1108 00:20:46.389016   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:46.389022   50022 system_pods.go:89] "kube-apiserver-old-k8s-version-590541" [87b2cf34-c41c-47e0-9042-75cc9f45a3c5] Pending
	I1108 00:20:46.389025   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:46.389032   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:46.389037   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:46.389052   50022 retry.go:31] will retry after 8.995080935s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:55.391202   50022 system_pods.go:86] 7 kube-system pods found
	I1108 00:20:55.391228   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:55.391233   50022 system_pods.go:89] "etcd-old-k8s-version-590541" [0efed662-1891-4909-9452-76ec2984dbe2] Running
	I1108 00:20:55.391237   50022 system_pods.go:89] "kube-apiserver-old-k8s-version-590541" [87b2cf34-c41c-47e0-9042-75cc9f45a3c5] Running
	I1108 00:20:55.391241   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:55.391245   50022 system_pods.go:89] "kube-scheduler-old-k8s-version-590541" [a722f002-c4ab-467a-810a-20cf46a13211] Pending
	I1108 00:20:55.391252   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:55.391256   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:55.391272   50022 retry.go:31] will retry after 10.028239262s: missing components: kube-controller-manager, kube-scheduler
	I1108 00:21:05.426292   50022 system_pods.go:86] 8 kube-system pods found
	I1108 00:21:05.426317   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:21:05.426323   50022 system_pods.go:89] "etcd-old-k8s-version-590541" [0efed662-1891-4909-9452-76ec2984dbe2] Running
	I1108 00:21:05.426327   50022 system_pods.go:89] "kube-apiserver-old-k8s-version-590541" [87b2cf34-c41c-47e0-9042-75cc9f45a3c5] Running
	I1108 00:21:05.426331   50022 system_pods.go:89] "kube-controller-manager-old-k8s-version-590541" [90563d50-3d48-4256-ae70-82a2a6d1c251] Running
	I1108 00:21:05.426335   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:21:05.426339   50022 system_pods.go:89] "kube-scheduler-old-k8s-version-590541" [a722f002-c4ab-467a-810a-20cf46a13211] Running
	I1108 00:21:05.426345   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:21:05.426349   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:21:05.426356   50022 system_pods.go:126] duration metric: took 52.559298515s to wait for k8s-apps to be running ...
	I1108 00:21:05.426363   50022 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 00:21:05.426403   50022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:21:05.443281   50022 system_svc.go:56] duration metric: took 16.903571ms WaitForService to wait for kubelet.
	I1108 00:21:05.443315   50022 kubeadm.go:581] duration metric: took 1m6.333213694s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1108 00:21:05.443337   50022 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:21:05.447040   50022 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:21:05.447064   50022 node_conditions.go:123] node cpu capacity is 2
	I1108 00:21:05.447074   50022 node_conditions.go:105] duration metric: took 3.731788ms to run NodePressure ...
	I1108 00:21:05.447083   50022 start.go:228] waiting for startup goroutines ...
	I1108 00:21:05.447089   50022 start.go:233] waiting for cluster config update ...
	I1108 00:21:05.447098   50022 start.go:242] writing updated cluster config ...
	I1108 00:21:05.447409   50022 ssh_runner.go:195] Run: rm -f paused
	I1108 00:21:05.496203   50022 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1108 00:21:05.498233   50022 out.go:177] 
	W1108 00:21:05.499660   50022 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1108 00:21:05.500985   50022 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1108 00:21:05.502464   50022 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-590541" cluster and "default" namespace by default
	
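The "will retry after …: missing components" lines above are minikube's poll loop (retry.go:31): it re-lists kube-system pods with randomized backoff until etcd, kube-apiserver, kube-controller-manager, and kube-scheduler all report Running. A minimal client-go sketch of that poll-until-ready pattern follows; waitForSystemPods, the kubeconfig path, the 5s interval, and the 6m timeout are illustrative assumptions, not minikube's actual values.

    package main

    import (
        "context"
        "fmt"
        "strings"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForSystemPods (hypothetical helper) re-lists kube-system pods until
    // every expected control-plane component has a Running pod, printing the
    // same "missing components" summary seen in the log above.
    func waitForSystemPods(ctx context.Context, cs kubernetes.Interface) error {
        expected := []string{"etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler"}
        return wait.PollUntilContextTimeout(ctx, 5*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
                if err != nil {
                    return false, nil // transient API errors: keep polling
                }
                var missing []string
                for _, want := range expected {
                    found := false
                    for _, p := range pods.Items {
                        // Static control-plane pods are named <component>-<node>.
                        if strings.HasPrefix(p.Name, want) && p.Status.Phase == corev1.PodRunning {
                            found = true
                            break
                        }
                    }
                    if !found {
                        missing = append(missing, want)
                    }
                }
                if len(missing) > 0 {
                    fmt.Printf("will retry: missing components: %s\n", strings.Join(missing, ", "))
                    return false, nil
                }
                return true, nil
            })
    }

    func main() {
        // Kubeconfig path is an assumption; adjust for the environment.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitForSystemPods(context.Background(), cs); err != nil {
            panic(err)
        }
    }
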
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-11-08 00:13:11 UTC, ends at Wed 2023-11-08 00:27:48 UTC. --
	Nov 08 00:27:48 embed-certs-253253 crio[727]: time="2023-11-08 00:27:48.011527040Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699403268011510291,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=4b57ead5-8f52-4629-b5da-f5578d1ca6ef name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:27:48 embed-certs-253253 crio[727]: time="2023-11-08 00:27:48.012300661Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d4d79744-9d65-4cfd-b2be-527cfef6d854 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:27:48 embed-certs-253253 crio[727]: time="2023-11-08 00:27:48.012377098Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d4d79744-9d65-4cfd-b2be-527cfef6d854 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:27:48 embed-certs-253253 crio[727]: time="2023-11-08 00:27:48.012557397Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a448430e616fc8bce8ccd852cfd4f69e5b6cf66566029824b39b1b7ec72f5d0,PodSandboxId:e704b69630a14bc150790444bb9f5922934520bd59b741034b3af030dd3154bc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699402723944311961,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa05e7e5-87e7-43ac-af74-1c8a713b51c5,},Annotations:map[string]string{io.kubernetes.container.hash: f08330f1,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:610e49643c61470c996765677777c742caa805c0ba22eeec80e58174b6944205,PodSandboxId:d39a130850a3305fe58ff1962843f8f4abf944490777b24aa7bd64ee8f734a46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699402723536620550,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-thtp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3671b72-d562-4be2-9942-e971ee31b2c3,},Annotations:map[string]string{io.kubernetes.container.hash: 4e6a9c27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:145fd37d0c2140dc51a3911cb49bc3c8a6f67577994c48358fa4a03d43a60fa9,PodSandboxId:5d7fbc7f78bd27da40d11ae605c7c5545720800493c6651c0f3a24d40665dd5a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699402721164054403,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shp9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: cda240f2-977b-4318-9ee4-74f0090af489,},Annotations:map[string]string{io.kubernetes.container.hash: d10e2de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:839d5e12d3d5b0d8a803affe356b49fd782c553f882b0a29ac546df2e09ebee2,PodSandboxId:27d4c7691e43457e1dae6953ce7530ad9a019bf5ff5121dc0a25dfac10c95fc8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699402699250172747,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-253253,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: dece3072a963622363344a68ed68f60a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6041bef5c201bdca4a81bfb77a4d5f2c2d045393f3f7f8194d55cb1b7a3c806,PodSandboxId:fe825b3fb7a8fab84c5cfcf27725b9039ef08f7add8c41908d01f9050c44bc5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699402699278926824,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-253253,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: fd3dad67cbb105bf1c12cfa4d77a5516,},Annotations:map[string]string{io.kubernetes.container.hash: 460b5609,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f07ee7e14c7c0e3cbf1a7524433aa1920f3779093dc1b7c8ea38deb6087613,PodSandboxId:f4f214e73ec2525d4a6ed2b0a4f16328c717f1110c3d5ce773f0c67603c24bd4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699402698892453284,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-253253,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 12f202cfa4431635b8e608b4139d09ff,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a932303ed94d4eb247039a36dac42ce63a4506fe9af9bff104234376c9ec2ea5,PodSandboxId:4c8572e0a6c42cf4a4f04757b8b3c240f6fddedc7403ae4b04dcb5ca209adc7b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699402698987017299,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-253253,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be66e0dfe0c5d13f7ee475b7a4c8e76b
,},Annotations:map[string]string{io.kubernetes.container.hash: 4db098a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d4d79744-9d65-4cfd-b2be-527cfef6d854 name=/runtime.v1.RuntimeService/ListContainers
	[... the same Version / ImageFsInfo / ListContainers polling cycle repeats three more times within this second (00:27:48.062, .113, .178), each returning an identical container list; duplicate dumps elided ...]
	
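Each journal cycle above is one CRI poll over CRI-O's socket: a Version probe, an ImageFsInfo query, then ListContainers with an empty filter (hence the "No filters were applied" debug line), repeating roughly every 50 ms. A sketch of issuing the same ListContainers RPC with the CRI API client; the socket path is taken from the node's cri-socket annotation, and opening it normally requires root.

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimev1.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // An empty filter returns the full container list, as in the journal.
        resp, err := client.ListContainers(ctx, &runtimev1.ListContainersRequest{
            Filter: &runtimev1.ContainerFilter{},
        })
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%.13s  %-25s  %s\n", c.Id, c.Metadata.Name, c.State)
        }
    }
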
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5a448430e616f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   e704b69630a14       storage-provisioner
	610e49643c614       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   d39a130850a33       coredns-5dd5756b68-thtp4
	145fd37d0c214       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   9 minutes ago       Running             kube-proxy                0                   5d7fbc7f78bd2       kube-proxy-shp9z
	d6041bef5c201       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   9 minutes ago       Running             kube-apiserver            2                   fe825b3fb7a8f       kube-apiserver-embed-certs-253253
	839d5e12d3d5b       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   9 minutes ago       Running             kube-controller-manager   2                   27d4c7691e434       kube-controller-manager-embed-certs-253253
	a932303ed94d4       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   4c8572e0a6c42       etcd-embed-certs-253253
	d2f07ee7e14c7       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   9 minutes ago       Running             kube-scheduler            2                   f4f214e73ec25       kube-scheduler-embed-certs-253253
	
	* 
	* ==> coredns [610e49643c61470c996765677777c742caa805c0ba22eeec80e58174b6944205] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48227 - 57258 "HINFO IN 5919024392424834459.2329518990281447896. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015428028s
	
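The lone HINFO query above is CoreDNS's startup self-check against its own listener; real cluster lookups go through the kube-dns Service. As a sketch, a pinned net.Resolver can reproduce such a lookup from Go; the 10.96.0.10 ClusterIP is the conventional default and an assumption here.

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
                d := net.Dialer{Timeout: 2 * time.Second}
                // Assumed kube-dns ClusterIP; check `kubectl -n kube-system get svc kube-dns`.
                return d.DialContext(ctx, network, "10.96.0.10:53")
            },
        }
        addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
        if err != nil {
            panic(err)
        }
        fmt.Println(addrs)
    }
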
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-253253
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-253253
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e
	                    minikube.k8s.io/name=embed-certs-253253
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_08T00_18_27_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Nov 2023 00:18:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-253253
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Nov 2023 00:27:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Nov 2023 00:23:54 +0000   Wed, 08 Nov 2023 00:18:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Nov 2023 00:23:54 +0000   Wed, 08 Nov 2023 00:18:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Nov 2023 00:23:54 +0000   Wed, 08 Nov 2023 00:18:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Nov 2023 00:23:54 +0000   Wed, 08 Nov 2023 00:18:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.159
	  Hostname:    embed-certs-253253
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a10ee08b8f4f4452abe24ecfc389bc9c
	  System UUID:                a10ee08b-8f4f-4452-abe2-4ecfc389bc9c
	  Boot ID:                    9f9d89ce-b341-40b8-9f1b-fd7bd7add76a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-thtp4                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m9s
	  kube-system                 etcd-embed-certs-253253                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-embed-certs-253253             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-embed-certs-253253    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-shp9z                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-scheduler-embed-certs-253253             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-57f55c9bc5-f8rk4               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m6s   kube-proxy       
	  Normal  Starting                 9m21s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m21s  kubelet          Node embed-certs-253253 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s  kubelet          Node embed-certs-253253 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s  kubelet          Node embed-certs-253253 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m21s  kubelet          Node embed-certs-253253 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m21s  kubelet          Node embed-certs-253253 status is now: NodeReady
	  Normal  RegisteredNode           9m9s   node-controller  Node embed-certs-253253 event: Registered Node embed-certs-253253 in Controller
	
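The Conditions table above is what the "verifying NodePressure condition" step in the start log reads: list the nodes, fail if any Memory/Disk/PID pressure condition is True, then report ephemeral-storage and CPU capacity. A client-go sketch of that check (verifyNodePressure is an illustrative name):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func verifyNodePressure(ctx context.Context, cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            for _, cond := range n.Status.Conditions {
                switch cond.Type {
                case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                    if cond.Status == corev1.ConditionTrue {
                        return fmt.Errorf("node %s reports %s", n.Name, cond.Type)
                    }
                }
            }
            // Mirrors "node storage ephemeral capacity is ..." / "node cpu capacity is ...".
            fmt.Printf("node %s: ephemeral-storage %s, cpu %s\n", n.Name,
                n.Status.Capacity.StorageEphemeral().String(),
                n.Status.Capacity.Cpu().String())
        }
        return nil
    }

    func main() {
        // Clientset construction omitted; wire it up as in the first sketch.
    }
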
	* 
	* ==> dmesg <==
	* [Nov 8 00:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067340] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.413433] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.604799] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.142679] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.461087] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.051861] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.112114] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.153549] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.134252] systemd-fstab-generator[687]: Ignoring "noauto" for root device
	[  +0.235283] systemd-fstab-generator[711]: Ignoring "noauto" for root device
	[ +17.184635] systemd-fstab-generator[926]: Ignoring "noauto" for root device
	[ +20.677236] kauditd_printk_skb: 34 callbacks suppressed
	[Nov 8 00:18] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.550389] systemd-fstab-generator[3734]: Ignoring "noauto" for root device
	[  +9.808129] systemd-fstab-generator[4059]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [a932303ed94d4eb247039a36dac42ce63a4506fe9af9bff104234376c9ec2ea5] <==
	* {"level":"info","ts":"2023-11-08T00:18:21.151859Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.159:2380"}
	{"level":"info","ts":"2023-11-08T00:18:21.153479Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f0ef8018a32f46af","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2023-11-08T00:18:21.155881Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-08T00:18:21.155947Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-08T00:18:21.155971Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-08T00:18:21.158965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af switched to configuration voters=(17361235931841906351)"}
	{"level":"info","ts":"2023-11-08T00:18:21.159095Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bc02953927cca850","local-member-id":"f0ef8018a32f46af","added-peer-id":"f0ef8018a32f46af","added-peer-peer-urls":["https://192.168.39.159:2380"]}
	{"level":"info","ts":"2023-11-08T00:18:21.495785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-08T00:18:21.495854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-08T00:18:21.495872Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af received MsgPreVoteResp from f0ef8018a32f46af at term 1"}
	{"level":"info","ts":"2023-11-08T00:18:21.4959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af became candidate at term 2"}
	{"level":"info","ts":"2023-11-08T00:18:21.495905Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af received MsgVoteResp from f0ef8018a32f46af at term 2"}
	{"level":"info","ts":"2023-11-08T00:18:21.495914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af became leader at term 2"}
	{"level":"info","ts":"2023-11-08T00:18:21.495921Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f0ef8018a32f46af elected leader f0ef8018a32f46af at term 2"}
	{"level":"info","ts":"2023-11-08T00:18:21.500045Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f0ef8018a32f46af","local-member-attributes":"{Name:embed-certs-253253 ClientURLs:[https://192.168.39.159:2379]}","request-path":"/0/members/f0ef8018a32f46af/attributes","cluster-id":"bc02953927cca850","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-08T00:18:21.50027Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T00:18:21.500819Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T00:18:21.506069Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-08T00:18:21.506854Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-08T00:18:21.506972Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-08T00:18:21.507109Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.159:2379"}
	{"level":"info","ts":"2023-11-08T00:18:21.507272Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T00:18:21.532573Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bc02953927cca850","local-member-id":"f0ef8018a32f46af","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T00:18:21.532871Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T00:18:21.532954Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
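The etcd log above shows a fresh single-member cluster: f0ef8018a32f46af pre-votes, wins the term-2 election, and serves clients on 192.168.39.159:2379. A sketch of querying that member's status with the etcd v3 client; the endpoint comes from the log, while the cert paths assume minikube's layout under /var/lib/minikube/certs.

    package main

    import (
        "context"
        "fmt"
        "time"

        "go.etcd.io/etcd/client/pkg/v3/transport"
        clientv3 "go.etcd.io/etcd/client/v3"
    )

    func main() {
        // Assumed cert locations (kubeadm keeps equivalents under /etc/kubernetes/pki/etcd).
        tlsInfo := transport.TLSInfo{
            CertFile:      "/var/lib/minikube/certs/etcd/server.crt",
            KeyFile:       "/var/lib/minikube/certs/etcd/server.key",
            TrustedCAFile: "/var/lib/minikube/certs/etcd/ca.crt",
        }
        tlsCfg, err := tlsInfo.ClientConfig()
        if err != nil {
            panic(err)
        }
        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"https://192.168.39.159:2379"},
            DialTimeout: 5 * time.Second,
            TLS:         tlsCfg,
        })
        if err != nil {
            panic(err)
        }
        defer cli.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        st, err := cli.Status(ctx, "https://192.168.39.159:2379")
        if err != nil {
            panic(err)
        }
        // For this single-member cluster, leader should print f0ef8018a32f46af.
        fmt.Printf("version=%s leader=%x raftTerm=%d\n", st.Version, st.Leader, st.RaftTerm)
    }
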
	* 
	* ==> kernel <==
	*  00:27:48 up 14 min,  0 users,  load average: 0.12, 0.24, 0.24
	Linux embed-certs-253253 5.10.57 #1 SMP Tue Nov 7 06:51:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [d6041bef5c201bdca4a81bfb77a4d5f2c2d045393f3f7f8194d55cb1b7a3c806] <==
	* I1108 00:25:23.444277       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1108 00:25:23.682763       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	E1108 00:25:33.683870       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","workload-high","workload-low","global-default","catch-all","exempt","system","node-high"] items=[{},{},{},{},{},{},{},{}]
	E1108 00:25:43.684433       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","global-default","catch-all","exempt","system","node-high","leader-election","workload-high"] items=[{},{},{},{},{},{},{},{}]
	E1108 00:25:53.685080       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","global-default","catch-all","exempt","system","node-high","leader-election"] items=[{},{},{},{},{},{},{},{}]
	E1108 00:26:03.685932       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	E1108 00:26:13.686922       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","workload-high","workload-low","global-default","catch-all","exempt","system","node-high"] items=[{},{},{},{},{},{},{},{}]
	I1108 00:26:23.444899       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1108 00:26:23.688182       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	W1108 00:26:24.556774       1 handler_proxy.go:93] no RequestInfo found in the context
	E1108 00:26:24.556849       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1108 00:26:24.556857       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1108 00:26:24.557974       1 handler_proxy.go:93] no RequestInfo found in the context
	E1108 00:26:24.558141       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1108 00:26:24.558174       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1108 00:26:33.689875       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","catch-all","exempt","system","node-high","leader-election","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E1108 00:26:43.690444       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","catch-all","exempt","system","node-high","leader-election","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E1108 00:26:53.691589       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","system","node-high","leader-election","workload-high","workload-low","global-default"] items=[{},{},{},{},{},{},{},{}]
	E1108 00:27:03.693020       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","system","node-high","leader-election","workload-high","workload-low","global-default","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E1108 00:27:13.693926       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","global-default","catch-all","exempt","system","node-high","leader-election"] items=[{},{},{},{},{},{},{},{}]
	I1108 00:27:23.445048       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1108 00:27:23.695214       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","system","node-high","leader-election","workload-high","workload-low","global-default"] items=[{},{},{},{},{},{},{},{}]
	E1108 00:27:33.695830       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","catch-all","exempt","system","node-high","leader-election","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E1108 00:27:43.697524       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	
	* 
	* ==> kube-controller-manager [839d5e12d3d5b0d8a803affe356b49fd782c553f882b0a29ac546df2e09ebee2] <==
	* I1108 00:22:10.315921       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:22:39.872860       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:22:40.326924       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:23:09.879973       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:23:10.336287       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:23:39.886410       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:23:40.345413       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:24:09.891625       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:24:10.354989       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1108 00:24:39.450104       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="396.107µs"
	E1108 00:24:39.898503       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:24:40.364189       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1108 00:24:50.442103       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="116.844µs"
	E1108 00:25:09.905016       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:25:10.373893       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:25:39.911074       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:25:40.394125       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:26:09.918040       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:26:10.403355       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:26:39.923958       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:26:40.412081       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:27:09.929277       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:27:10.421359       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:27:39.936188       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:27:40.430580       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [145fd37d0c2140dc51a3911cb49bc3c8a6f67577994c48358fa4a03d43a60fa9] <==
	* I1108 00:18:42.179931       1 server_others.go:69] "Using iptables proxy"
	I1108 00:18:42.219908       1 node.go:141] Successfully retrieved node IP: 192.168.39.159
	I1108 00:18:42.392604       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1108 00:18:42.392649       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1108 00:18:42.416538       1 server_others.go:152] "Using iptables Proxier"
	I1108 00:18:42.416661       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1108 00:18:42.416914       1 server.go:846] "Version info" version="v1.28.3"
	I1108 00:18:42.416924       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 00:18:42.426618       1 config.go:188] "Starting service config controller"
	I1108 00:18:42.427411       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1108 00:18:42.427444       1 config.go:97] "Starting endpoint slice config controller"
	I1108 00:18:42.427450       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1108 00:18:42.438847       1 config.go:315] "Starting node config controller"
	I1108 00:18:42.439832       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1108 00:18:42.530664       1 shared_informer.go:318] Caches are synced for service config
	I1108 00:18:42.531037       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1108 00:18:42.546914       1 shared_informer.go:318] Caches are synced for node config
	
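	kube-proxy notes above that it sets route_localnet=1 so NodePort services keep answering on loopback. A minimal sketch of flipping the same sysctl directly (assumes Linux and root; not kube-proxy's actual code path):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// The sysctl kube-proxy mentions in the log line above.
	const routeLocalnet = "/proc/sys/net/ipv4/conf/all/route_localnet"

	func main() {
		cur, err := os.ReadFile(routeLocalnet)
		if err != nil {
			panic(err)
		}
		fmt.Println("route_localnet =", strings.TrimSpace(string(cur)))

		// Enable routing of loopback traffic so NodePorts are reachable
		// on 127.0.0.1, as the proxier configures. Requires root.
		if err := os.WriteFile(routeLocalnet, []byte("1\n"), 0o644); err != nil {
			panic(err)
		}
	}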
	* 
	* ==> kube-scheduler [d2f07ee7e14c7c0e3cbf1a7524433aa1920f3779093dc1b7c8ea38deb6087613] <==
	* W1108 00:18:23.672580       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1108 00:18:23.674815       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1108 00:18:23.672886       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1108 00:18:23.672900       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1108 00:18:23.675141       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1108 00:18:23.675164       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1108 00:18:24.525289       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1108 00:18:24.525430       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1108 00:18:24.530357       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1108 00:18:24.530425       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1108 00:18:24.650381       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1108 00:18:24.650448       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1108 00:18:24.673433       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1108 00:18:24.673483       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1108 00:18:24.773445       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1108 00:18:24.773531       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1108 00:18:24.796652       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1108 00:18:24.796819       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 00:18:24.856914       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1108 00:18:24.857037       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1108 00:18:24.900015       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1108 00:18:24.900127       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1108 00:18:24.948304       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1108 00:18:24.948413       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1108 00:18:27.952552       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-11-08 00:13:11 UTC, ends at Wed 2023-11-08 00:27:48 UTC. --
	Nov 08 00:25:02 embed-certs-253253 kubelet[4066]: E1108 00:25:02.424481    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-f8rk4" podUID="927cc877-7a22-47e3-b666-1adf0cc1b5c6"
	Nov 08 00:25:13 embed-certs-253253 kubelet[4066]: E1108 00:25:13.425379    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-f8rk4" podUID="927cc877-7a22-47e3-b666-1adf0cc1b5c6"
	Nov 08 00:25:27 embed-certs-253253 kubelet[4066]: E1108 00:25:27.524255    4066 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 08 00:25:27 embed-certs-253253 kubelet[4066]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 08 00:25:27 embed-certs-253253 kubelet[4066]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 08 00:25:27 embed-certs-253253 kubelet[4066]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 08 00:25:28 embed-certs-253253 kubelet[4066]: E1108 00:25:28.424868    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-f8rk4" podUID="927cc877-7a22-47e3-b666-1adf0cc1b5c6"
	Nov 08 00:25:39 embed-certs-253253 kubelet[4066]: E1108 00:25:39.424864    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-f8rk4" podUID="927cc877-7a22-47e3-b666-1adf0cc1b5c6"
	Nov 08 00:25:51 embed-certs-253253 kubelet[4066]: E1108 00:25:51.425318    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-f8rk4" podUID="927cc877-7a22-47e3-b666-1adf0cc1b5c6"
	Nov 08 00:26:05 embed-certs-253253 kubelet[4066]: E1108 00:26:05.424614    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-f8rk4" podUID="927cc877-7a22-47e3-b666-1adf0cc1b5c6"
	Nov 08 00:26:20 embed-certs-253253 kubelet[4066]: E1108 00:26:20.425336    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-f8rk4" podUID="927cc877-7a22-47e3-b666-1adf0cc1b5c6"
	Nov 08 00:26:27 embed-certs-253253 kubelet[4066]: E1108 00:26:27.517279    4066 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 08 00:26:27 embed-certs-253253 kubelet[4066]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 08 00:26:27 embed-certs-253253 kubelet[4066]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 08 00:26:27 embed-certs-253253 kubelet[4066]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 08 00:26:32 embed-certs-253253 kubelet[4066]: E1108 00:26:32.424255    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-f8rk4" podUID="927cc877-7a22-47e3-b666-1adf0cc1b5c6"
	Nov 08 00:26:45 embed-certs-253253 kubelet[4066]: E1108 00:26:45.427846    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-f8rk4" podUID="927cc877-7a22-47e3-b666-1adf0cc1b5c6"
	Nov 08 00:26:56 embed-certs-253253 kubelet[4066]: E1108 00:26:56.424463    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-f8rk4" podUID="927cc877-7a22-47e3-b666-1adf0cc1b5c6"
	Nov 08 00:27:09 embed-certs-253253 kubelet[4066]: E1108 00:27:09.424599    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-f8rk4" podUID="927cc877-7a22-47e3-b666-1adf0cc1b5c6"
	Nov 08 00:27:23 embed-certs-253253 kubelet[4066]: E1108 00:27:23.425434    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-f8rk4" podUID="927cc877-7a22-47e3-b666-1adf0cc1b5c6"
	Nov 08 00:27:27 embed-certs-253253 kubelet[4066]: E1108 00:27:27.516914    4066 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 08 00:27:27 embed-certs-253253 kubelet[4066]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 08 00:27:27 embed-certs-253253 kubelet[4066]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 08 00:27:27 embed-certs-253253 kubelet[4066]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 08 00:27:34 embed-certs-253253 kubelet[4066]: E1108 00:27:34.425051    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-f8rk4" podUID="927cc877-7a22-47e3-b666-1adf0cc1b5c6"
	
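	The repeating kubelet errors above are expected in this suite: the Audit table below records metrics-server being enabled with --registries=MetricsServer=fake.domain, so the pull of fake.domain/registry.k8s.io/echoserver:1.4 can never succeed and the pod sits in ImagePullBackOff. A minimal sketch of spotting that state from container statuses with client-go (hypothetical helper, not the harness's code):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitingReasons returns the Waiting reason of each container in a pod,
	// e.g. "ImagePullBackOff" for the metrics-server pod in these logs.
	func waitingReasons(client kubernetes.Interface, ns, name string) ([]string, error) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return nil, err
		}
		var reasons []string
		for _, cs := range pod.Status.ContainerStatuses {
			if cs.State.Waiting != nil {
				reasons = append(reasons, cs.State.Waiting.Reason)
			}
		}
		return reasons, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		// Pod name taken from the kubelet messages above.
		reasons, err := waitingReasons(client, "kube-system", "metrics-server-57f55c9bc5-f8rk4")
		if err != nil {
			panic(err)
		}
		fmt.Println(reasons) // expected here: [ImagePullBackOff]
	}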
	* 
	* ==> storage-provisioner [5a448430e616fc8bce8ccd852cfd4f69e5b6cf66566029824b39b1b7ec72f5d0] <==
	* I1108 00:18:44.198187       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 00:18:44.214219       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 00:18:44.214361       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1108 00:18:44.224253       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 00:18:44.226326       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bdb638a5-4eef-4712-a557-6b799a37a79b", APIVersion:"v1", ResourceVersion:"416", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-253253_a66df68b-21c0-4eba-863c-c8c2003b7d9a became leader
	I1108 00:18:44.227012       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-253253_a66df68b-21c0-4eba-863c-c8c2003b7d9a!
	I1108 00:18:44.328221       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-253253_a66df68b-21c0-4eba-863c-c8c2003b7d9a!
	
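	The provisioner above waits to win the kube-system/k8s.io-minikube-hostpath election before starting its controller. A minimal sketch of the same pattern with client-go's leaderelection package; it uses a coordination.k8s.io Lease lock rather than the Endpoints object this older provisioner logs, and the identity and timings are illustrative:

	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		id, _ := os.Hostname() // illustrative identity, like embed-certs-253253_<uuid> above

		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:            lock,
			ReleaseOnCancel: true,
			LeaseDuration:   15 * time.Second,
			RenewDeadline:   10 * time.Second,
			RetryPeriod:     2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("became leader; the provisioner controller would start here")
				},
				OnStoppedLeading: func() {
					log.Println("lost leadership")
				},
			},
		})
	}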

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-253253 -n embed-certs-253253
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-253253 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-f8rk4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-253253 describe pod metrics-server-57f55c9bc5-f8rk4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-253253 describe pod metrics-server-57f55c9bc5-f8rk4: exit status 1 (75.607074ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-f8rk4" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-253253 describe pod metrics-server-57f55c9bc5-f8rk4: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.64s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.91s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1108 00:18:53.871448   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-320390 -n no-preload-320390
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-11-08 00:27:46.875418779 +0000 UTC m=+5201.038727661
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
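The assertion above polls for up to 9m0s for a Running pod labeled k8s-app=kubernetes-dashboard and gives up with context deadline exceeded because no such pod ever appears. A minimal sketch of that style of wait (hypothetical helper built on client-go, not the harness's actual implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunningPods polls until every pod matching selector in ns is Running,
// or the timeout expires, surfacing a deadline error as in the failure above.
func waitForRunningPods(client kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // nothing yet; keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	err = waitForRunningPods(client, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard", 9*time.Minute)
	fmt.Println("wait result:", err)
}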
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-320390 -n no-preload-320390
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-320390 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-320390 logs -n 25: (1.826768069s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p kubernetes-upgrade-161055                           | kubernetes-upgrade-161055    | jenkins | v1.32.0 | 08 Nov 23 00:04 UTC | 08 Nov 23 00:04 UTC |
	| start   | -p no-preload-320390                                   | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:04 UTC | 08 Nov 23 00:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-484343                              | cert-expiration-484343       | jenkins | v1.32.0 | 08 Nov 23 00:04 UTC | 08 Nov 23 00:05 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-590541        | old-k8s-version-590541       | jenkins | v1.32.0 | 08 Nov 23 00:05 UTC | 08 Nov 23 00:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-590541                              | old-k8s-version-590541       | jenkins | v1.32.0 | 08 Nov 23 00:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-484343                              | cert-expiration-484343       | jenkins | v1.32.0 | 08 Nov 23 00:05 UTC | 08 Nov 23 00:05 UTC |
	| start   | -p embed-certs-253253                                  | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:05 UTC | 08 Nov 23 00:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-320390             | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:06 UTC | 08 Nov 23 00:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-320390                                   | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-253253            | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:06 UTC | 08 Nov 23 00:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-253253                                  | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-688874                              | stopped-upgrade-688874       | jenkins | v1.32.0 | 08 Nov 23 00:06 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-688874                              | stopped-upgrade-688874       | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC | 08 Nov 23 00:07 UTC |
	| delete  | -p                                                     | disable-driver-mounts-560216 | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC | 08 Nov 23 00:07 UTC |
	|         | disable-driver-mounts-560216                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC | 08 Nov 23 00:09 UTC |
	|         | default-k8s-diff-port-039263                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-590541             | old-k8s-version-590541       | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-590541                              | old-k8s-version-590541       | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC | 08 Nov 23 00:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-320390                  | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-253253                 | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-320390                                   | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC | 08 Nov 23 00:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-253253                                  | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC | 08 Nov 23 00:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-039263  | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC | 08 Nov 23 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC |                     |
	|         | default-k8s-diff-port-039263                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-039263       | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:12 UTC | 08 Nov 23 00:19 UTC |
	|         | default-k8s-diff-port-039263                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/08 00:12:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 00:12:00.921478   51228 out.go:296] Setting OutFile to fd 1 ...
	I1108 00:12:00.921584   51228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:12:00.921592   51228 out.go:309] Setting ErrFile to fd 2...
	I1108 00:12:00.921597   51228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:12:00.921752   51228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
	I1108 00:12:00.922282   51228 out.go:303] Setting JSON to false
	I1108 00:12:00.923151   51228 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6870,"bootTime":1699395451,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 00:12:00.923210   51228 start.go:138] virtualization: kvm guest
	I1108 00:12:00.925322   51228 out.go:177] * [default-k8s-diff-port-039263] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1108 00:12:00.926718   51228 out.go:177]   - MINIKUBE_LOCATION=17585
	I1108 00:12:00.928030   51228 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 00:12:00.926756   51228 notify.go:220] Checking for updates...
	I1108 00:12:00.930659   51228 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:12:00.932049   51228 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	I1108 00:12:00.933341   51228 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 00:12:00.934394   51228 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 00:12:00.936334   51228 config.go:182] Loaded profile config "default-k8s-diff-port-039263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:12:00.936806   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:12:00.936857   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:12:00.950893   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36119
	I1108 00:12:00.951284   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:12:00.951775   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:12:00.951796   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:12:00.952131   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:12:00.952308   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:12:00.952537   51228 driver.go:378] Setting default libvirt URI to qemu:///system
	I1108 00:12:00.952850   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:12:00.952894   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:12:00.966402   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44715
	I1108 00:12:00.966726   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:12:00.967218   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:12:00.967238   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:12:00.967525   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:12:00.967705   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
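	Each "Launching plugin server for driver kvm2" exchange above is libmachine starting the docker-machine-driver-kvm2 binary, reading back the localhost port it listens on (36119, then 44715), and invoking driver methods over RPC. A minimal sketch of that host/plugin handshake with Go's net/rpc; the Driver type and GetVersion method are illustrative stand-ins, not libmachine's real interface:

	package main

	import (
		"fmt"
		"net"
		"net/rpc"
	)

	// Driver is an illustrative stand-in for a machine-driver plugin API.
	type Driver struct{}

	func (d *Driver) GetVersion(_ struct{}, reply *int) error {
		*reply = 1 // API version, like "Using API Version  1" in the logs
		return nil
	}

	func main() {
		// Plugin side: listen on an ephemeral localhost port and serve RPCs.
		srv := rpc.NewServer()
		if err := srv.Register(&Driver{}); err != nil {
			panic(err)
		}
		ln, err := net.Listen("tcp", "127.0.0.1:0")
		if err != nil {
			panic(err)
		}
		fmt.Println("plugin server listening at", ln.Addr()) // the host reads this address
		go srv.Accept(ln)

		// Host side: dial the advertised address and call a method.
		client, err := rpc.Dial("tcp", ln.Addr().String())
		if err != nil {
			panic(err)
		}
		var version int
		if err := client.Call("Driver.GetVersion", struct{}{}, &version); err != nil {
			panic(err)
		}
		fmt.Println("driver API version:", version)
	}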
	I1108 00:12:01.002079   51228 out.go:177] * Using the kvm2 driver based on existing profile
	I1108 00:12:01.003352   51228 start.go:298] selected driver: kvm2
	I1108 00:12:01.003362   51228 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-039263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-039263 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.116 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:12:01.003471   51228 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 00:12:01.004117   51228 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:12:01.004197   51228 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17585-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1108 00:12:01.018635   51228 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1108 00:12:01.018987   51228 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 00:12:01.019047   51228 cni.go:84] Creating CNI manager for ""
	I1108 00:12:01.019060   51228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:12:01.019072   51228 start_flags.go:323] config:
	{Name:default-k8s-diff-port-039263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-039263 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.116 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:12:01.019251   51228 iso.go:125] acquiring lock: {Name:mk02d02b2a7a45dbdd1b46a32fb0724673cb4d8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:12:01.021306   51228 out.go:177] * Starting control plane node default-k8s-diff-port-039263 in cluster default-k8s-diff-port-039263
	I1108 00:12:00.865093   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:03.937104   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:01.022723   51228 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1108 00:12:01.022765   51228 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1108 00:12:01.022777   51228 cache.go:56] Caching tarball of preloaded images
	I1108 00:12:01.022864   51228 preload.go:174] Found /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 00:12:01.022875   51228 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1108 00:12:01.022984   51228 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/config.json ...
	I1108 00:12:01.023164   51228 start.go:365] acquiring machines lock for default-k8s-diff-port-039263: {Name:mkf032f30be570950285b6e092e75fb29cc3d166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1108 00:12:10.017091   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:13.089091   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:19.169065   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:22.241084   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:28.321050   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:31.393060   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:37.473056   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:40.475708   50505 start.go:369] acquired machines lock for "no-preload-320390" in 3m26.103068871s
	I1108 00:12:40.475773   50505 start.go:96] Skipping create...Using existing machine configuration
	I1108 00:12:40.475781   50505 fix.go:54] fixHost starting: 
	I1108 00:12:40.476087   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:12:40.476116   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:12:40.490309   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45419
	I1108 00:12:40.490708   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:12:40.491196   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:12:40.491217   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:12:40.491530   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:12:40.491718   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:40.491870   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetState
	I1108 00:12:40.493597   50505 fix.go:102] recreateIfNeeded on no-preload-320390: state=Stopped err=<nil>
	I1108 00:12:40.493628   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	W1108 00:12:40.493762   50505 fix.go:128] unexpected machine state, will restart: <nil>
	I1108 00:12:40.495670   50505 out.go:177] * Restarting existing kvm2 VM for "no-preload-320390" ...
	I1108 00:12:40.496930   50505 main.go:141] libmachine: (no-preload-320390) Calling .Start
	I1108 00:12:40.497098   50505 main.go:141] libmachine: (no-preload-320390) Ensuring networks are active...
	I1108 00:12:40.497753   50505 main.go:141] libmachine: (no-preload-320390) Ensuring network default is active
	I1108 00:12:40.498094   50505 main.go:141] libmachine: (no-preload-320390) Ensuring network mk-no-preload-320390 is active
	I1108 00:12:40.498442   50505 main.go:141] libmachine: (no-preload-320390) Getting domain xml...
	I1108 00:12:40.499199   50505 main.go:141] libmachine: (no-preload-320390) Creating domain...
	I1108 00:12:41.718179   50505 main.go:141] libmachine: (no-preload-320390) Waiting to get IP...
	I1108 00:12:41.719024   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:41.719423   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:41.719497   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:41.719407   51373 retry.go:31] will retry after 204.819851ms: waiting for machine to come up
	I1108 00:12:41.925924   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:41.926414   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:41.926445   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:41.926361   51373 retry.go:31] will retry after 237.59613ms: waiting for machine to come up
	I1108 00:12:42.165848   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:42.166251   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:42.166282   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:42.166195   51373 retry.go:31] will retry after 306.914093ms: waiting for machine to come up
	I1108 00:12:42.474651   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:42.475026   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:42.475057   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:42.474981   51373 retry.go:31] will retry after 490.427385ms: waiting for machine to come up
	I1108 00:12:42.967292   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:42.967709   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:42.967733   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:42.967661   51373 retry.go:31] will retry after 684.227655ms: waiting for machine to come up
	I1108 00:12:43.653384   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:43.653823   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:43.653847   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:43.653774   51373 retry.go:31] will retry after 640.101868ms: waiting for machine to come up
	I1108 00:12:40.473798   50022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 00:12:40.473838   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:12:40.475605   50022 machine.go:91] provisioned docker machine in 4m37.566672036s
	I1108 00:12:40.475639   50022 fix.go:56] fixHost completed within 4m37.589859084s
	I1108 00:12:40.475644   50022 start.go:83] releasing machines lock for "old-k8s-version-590541", held for 4m37.589890946s
	W1108 00:12:40.475670   50022 start.go:691] error starting host: provision: host is not running
	W1108 00:12:40.475777   50022 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1108 00:12:40.475788   50022 start.go:706] Will try again in 5 seconds ...
	I1108 00:12:44.295060   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:44.295559   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:44.295610   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:44.295506   51373 retry.go:31] will retry after 797.709386ms: waiting for machine to come up
	I1108 00:12:45.095135   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:45.095552   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:45.095575   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:45.095476   51373 retry.go:31] will retry after 1.052157242s: waiting for machine to come up
	I1108 00:12:46.149040   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:46.149393   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:46.149426   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:46.149336   51373 retry.go:31] will retry after 1.246701556s: waiting for machine to come up
	I1108 00:12:47.397579   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:47.397942   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:47.397981   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:47.397900   51373 retry.go:31] will retry after 1.742754262s: waiting for machine to come up
	I1108 00:12:49.142995   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:49.143390   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:49.143419   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:49.143349   51373 retry.go:31] will retry after 2.412997156s: waiting for machine to come up
	I1108 00:12:45.476072   50022 start.go:365] acquiring machines lock for old-k8s-version-590541: {Name:mkf032f30be570950285b6e092e75fb29cc3d166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1108 00:12:51.558471   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:51.558857   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:51.558880   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:51.558809   51373 retry.go:31] will retry after 3.169873944s: waiting for machine to come up
	I1108 00:12:54.732010   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:54.732320   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:54.732340   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:54.732292   51373 retry.go:31] will retry after 3.452837487s: waiting for machine to come up
	I1108 00:12:58.188516   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.188983   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has current primary IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.189014   50505 main.go:141] libmachine: (no-preload-320390) Found IP for machine: 192.168.61.176
	I1108 00:12:58.189036   50505 main.go:141] libmachine: (no-preload-320390) Reserving static IP address...
	I1108 00:12:58.189332   50505 main.go:141] libmachine: (no-preload-320390) Reserved static IP address: 192.168.61.176
	I1108 00:12:58.189364   50505 main.go:141] libmachine: (no-preload-320390) Waiting for SSH to be available...
	I1108 00:12:58.189388   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "no-preload-320390", mac: "52:54:00:0f:d8:91", ip: "192.168.61.176"} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.189415   50505 main.go:141] libmachine: (no-preload-320390) DBG | skip adding static IP to network mk-no-preload-320390 - found existing host DHCP lease matching {name: "no-preload-320390", mac: "52:54:00:0f:d8:91", ip: "192.168.61.176"}
	I1108 00:12:58.189432   50505 main.go:141] libmachine: (no-preload-320390) DBG | Getting to WaitForSSH function...
	I1108 00:12:58.191264   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.191565   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.191598   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.191730   50505 main.go:141] libmachine: (no-preload-320390) DBG | Using SSH client type: external
	I1108 00:12:58.191760   50505 main.go:141] libmachine: (no-preload-320390) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa (-rw-------)
	I1108 00:12:58.191794   50505 main.go:141] libmachine: (no-preload-320390) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.176 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1108 00:12:58.191808   50505 main.go:141] libmachine: (no-preload-320390) DBG | About to run SSH command:
	I1108 00:12:58.191819   50505 main.go:141] libmachine: (no-preload-320390) DBG | exit 0
	I1108 00:12:58.284621   50505 main.go:141] libmachine: (no-preload-320390) DBG | SSH cmd err, output: <nil>: 
	I1108 00:12:58.284983   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetConfigRaw
	I1108 00:12:58.285600   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetIP
	I1108 00:12:58.287966   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.288289   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.288325   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.288532   50505 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/config.json ...
	I1108 00:12:58.288712   50505 machine.go:88] provisioning docker machine ...
	I1108 00:12:58.288732   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:58.288917   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetMachineName
	I1108 00:12:58.289074   50505 buildroot.go:166] provisioning hostname "no-preload-320390"
	I1108 00:12:58.289097   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetMachineName
	I1108 00:12:58.289217   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:58.291053   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.291329   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.291358   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.291460   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:58.291613   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.291749   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.291849   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:58.292009   50505 main.go:141] libmachine: Using SSH client type: native
	I1108 00:12:58.292394   50505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1108 00:12:58.292419   50505 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-320390 && echo "no-preload-320390" | sudo tee /etc/hostname
	I1108 00:12:58.433310   50505 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-320390
	
	I1108 00:12:58.433333   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:58.435959   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.436351   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.436383   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.436531   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:58.436710   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.436853   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.436959   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:58.437088   50505 main.go:141] libmachine: Using SSH client type: native
	I1108 00:12:58.437607   50505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1108 00:12:58.437633   50505 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-320390' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-320390/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-320390' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 00:12:58.578473   50505 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 00:12:58.578506   50505 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1108 00:12:58.578568   50505 buildroot.go:174] setting up certificates
	I1108 00:12:58.578582   50505 provision.go:83] configureAuth start
	I1108 00:12:58.578600   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetMachineName
	I1108 00:12:58.578889   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetIP
	I1108 00:12:58.581534   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.581857   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.581881   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.581948   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:58.583777   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.584002   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.584023   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.584121   50505 provision.go:138] copyHostCerts
	I1108 00:12:58.584172   50505 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1108 00:12:58.584184   50505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1108 00:12:58.584247   50505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1108 00:12:58.584327   50505 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1108 00:12:58.584337   50505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1108 00:12:58.584359   50505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1108 00:12:58.584407   50505 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1108 00:12:58.584415   50505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1108 00:12:58.584434   50505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1108 00:12:58.584473   50505 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.no-preload-320390 san=[192.168.61.176 192.168.61.176 localhost 127.0.0.1 minikube no-preload-320390]
	I1108 00:12:58.785035   50505 provision.go:172] copyRemoteCerts
	I1108 00:12:58.785095   50505 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 00:12:58.785127   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:58.787683   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.788001   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.788037   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.788194   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:58.788363   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.788534   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:58.788678   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:12:58.881791   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 00:12:58.905314   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1108 00:12:58.928183   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 00:12:58.951053   50505 provision.go:86] duration metric: configureAuth took 372.456375ms
	I1108 00:12:58.951079   50505 buildroot.go:189] setting minikube options for container-runtime
	I1108 00:12:58.951288   50505 config.go:182] Loaded profile config "no-preload-320390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:12:58.951368   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:58.953851   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.954158   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.954182   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.954309   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:58.954504   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.954689   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.954819   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:58.954964   50505 main.go:141] libmachine: Using SSH client type: native
	I1108 00:12:58.955269   50505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1108 00:12:58.955283   50505 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 00:12:59.265311   50505 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 00:12:59.265342   50505 machine.go:91] provisioned docker machine in 976.618103ms
	I1108 00:12:59.265353   50505 start.go:300] post-start starting for "no-preload-320390" (driver="kvm2")
	I1108 00:12:59.265362   50505 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 00:12:59.265377   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:59.265683   50505 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 00:12:59.265721   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:59.533994   50613 start.go:369] acquired machines lock for "embed-certs-253253" in 3m37.489465904s
	I1108 00:12:59.534047   50613 start.go:96] Skipping create...Using existing machine configuration
	I1108 00:12:59.534093   50613 fix.go:54] fixHost starting: 
	I1108 00:12:59.534485   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:12:59.534531   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:12:59.553784   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34533
	I1108 00:12:59.554193   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:12:59.554676   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:12:59.554702   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:12:59.555006   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:12:59.555188   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:12:59.555320   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetState
	I1108 00:12:59.556783   50613 fix.go:102] recreateIfNeeded on embed-certs-253253: state=Stopped err=<nil>
	I1108 00:12:59.556804   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	W1108 00:12:59.556989   50613 fix.go:128] unexpected machine state, will restart: <nil>
	I1108 00:12:59.558834   50613 out.go:177] * Restarting existing kvm2 VM for "embed-certs-253253" ...
	I1108 00:12:59.268378   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.268792   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:59.268836   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.268991   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:59.269175   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:59.269337   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:59.269480   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:12:59.363687   50505 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 00:12:59.368009   50505 info.go:137] Remote host: Buildroot 2021.02.12
	I1108 00:12:59.368028   50505 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1108 00:12:59.368087   50505 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1108 00:12:59.368176   50505 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1108 00:12:59.368287   50505 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 00:12:59.377685   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:12:59.399143   50505 start.go:303] post-start completed in 133.780055ms
	I1108 00:12:59.399161   50505 fix.go:56] fixHost completed within 18.923380073s
	I1108 00:12:59.399178   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:59.401608   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.401977   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:59.402007   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.402127   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:59.402315   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:59.402471   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:59.402650   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:59.402824   50505 main.go:141] libmachine: Using SSH client type: native
	I1108 00:12:59.403150   50505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1108 00:12:59.403162   50505 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1108 00:12:59.533831   50505 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699402379.481958632
	
	I1108 00:12:59.533852   50505 fix.go:206] guest clock: 1699402379.481958632
	I1108 00:12:59.533859   50505 fix.go:219] Guest: 2023-11-08 00:12:59.481958632 +0000 UTC Remote: 2023-11-08 00:12:59.399164235 +0000 UTC m=+225.183083525 (delta=82.794397ms)
	I1108 00:12:59.533876   50505 fix.go:190] guest clock delta is within tolerance: 82.794397ms
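
The fix.go lines above read the guest's clock over SSH (date +%s.%N), compare it to the host's, and accept the drift when it falls inside a tolerance. A small illustrative check; the 2s tolerance here is an assumption for the sketch, not necessarily minikube's real threshold:

package main

import (
	"fmt"
	"time"
)

// clockDriftOK reports whether the guest clock is within tolerance of
// the host clock, mirroring the "guest clock delta is within tolerance"
// line in the log above.
func clockDriftOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// values lifted from the log above: guest clock 1699402379.481958632
	guest := time.Unix(1699402379, 481958632)
	host := guest.Add(-82794397 * time.Nanosecond) // delta = 82.794397ms
	delta, ok := clockDriftOK(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
}
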
	I1108 00:12:59.533880   50505 start.go:83] releasing machines lock for "no-preload-320390", held for 19.058127295s
	I1108 00:12:59.533902   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:59.534171   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetIP
	I1108 00:12:59.537173   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.537616   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:59.537665   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.537736   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:59.538230   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:59.538431   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:59.538517   50505 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 00:12:59.538613   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:59.538659   50505 ssh_runner.go:195] Run: cat /version.json
	I1108 00:12:59.538683   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:59.541051   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.541283   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.541438   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:59.541463   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.541599   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:59.541608   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:59.541634   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.541775   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:59.541845   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:59.541939   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:59.541997   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:59.542078   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:12:59.542093   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:59.542265   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:12:59.637947   50505 ssh_runner.go:195] Run: systemctl --version
	I1108 00:12:59.660255   50505 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 00:12:59.809407   50505 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 00:12:59.816246   50505 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 00:12:59.816323   50505 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:12:59.831564   50505 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 00:12:59.831586   50505 start.go:472] detecting cgroup driver to use...
	I1108 00:12:59.831651   50505 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 00:12:59.847556   50505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 00:12:59.861077   50505 docker.go:203] disabling cri-docker service (if available) ...
	I1108 00:12:59.861143   50505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 00:12:59.876764   50505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 00:12:59.890894   50505 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 00:13:00.001947   50505 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 00:13:00.121923   50505 docker.go:219] disabling docker service ...
	I1108 00:13:00.122000   50505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 00:13:00.135525   50505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 00:13:00.148130   50505 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 00:13:00.259318   50505 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 00:13:00.368101   50505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 00:13:00.381138   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 00:13:00.398173   50505 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1108 00:13:00.398245   50505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:00.407655   50505 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 00:13:00.407699   50505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:00.416919   50505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:00.425767   50505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:00.434447   50505 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 00:13:00.443679   50505 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 00:13:00.451581   50505 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1108 00:13:00.451619   50505 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1108 00:13:00.464498   50505 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 00:13:00.474332   50505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 00:13:00.599521   50505 ssh_runner.go:195] Run: sudo systemctl restart crio
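
The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pin the pause image, switch cgroup_manager to cgroupfs, re-add conmon_cgroup = "pod") and then restarts crio. A hedged Go sketch of the same edit-and-restart sequence, shelling out the way the ssh_runner lines do; the sed commands are copied from the log, error handling is simplified, and this is illustrative rather than minikube's actual code:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a shell command and surfaces its combined output on error.
func run(cmd string) error {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %v\n%s", cmd, err, out)
	}
	return nil
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := []string{
		// same sed edits the log shows, applied locally for illustration
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' ` + conf,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf,
		`sudo sed -i '/conmon_cgroup = .*/d' ` + conf,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
	for _, s := range steps {
		if err := run(s); err != nil {
			fmt.Println(err)
			return
		}
	}
}
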
	I1108 00:13:00.770248   50505 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 00:13:00.770341   50505 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 00:13:00.775707   50505 start.go:540] Will wait 60s for crictl version
	I1108 00:13:00.775768   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:00.779578   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 00:13:00.821230   50505 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1108 00:13:00.821320   50505 ssh_runner.go:195] Run: crio --version
	I1108 00:13:00.872851   50505 ssh_runner.go:195] Run: crio --version
	I1108 00:13:00.920420   50505 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1108 00:12:59.560111   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Start
	I1108 00:12:59.560287   50613 main.go:141] libmachine: (embed-certs-253253) Ensuring networks are active...
	I1108 00:12:59.561030   50613 main.go:141] libmachine: (embed-certs-253253) Ensuring network default is active
	I1108 00:12:59.561390   50613 main.go:141] libmachine: (embed-certs-253253) Ensuring network mk-embed-certs-253253 is active
	I1108 00:12:59.561717   50613 main.go:141] libmachine: (embed-certs-253253) Getting domain xml...
	I1108 00:12:59.562287   50613 main.go:141] libmachine: (embed-certs-253253) Creating domain...
	I1108 00:13:00.806061   50613 main.go:141] libmachine: (embed-certs-253253) Waiting to get IP...
	I1108 00:13:00.806862   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:00.807268   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:00.807340   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:00.807226   51493 retry.go:31] will retry after 261.179966ms: waiting for machine to come up
	I1108 00:13:01.069535   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:01.070048   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:01.070078   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:01.069997   51493 retry.go:31] will retry after 302.795302ms: waiting for machine to come up
	I1108 00:13:01.374567   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:01.375094   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:01.375119   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:01.375043   51493 retry.go:31] will retry after 303.804523ms: waiting for machine to come up
	I1108 00:13:01.680374   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:01.680698   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:01.680726   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:01.680660   51493 retry.go:31] will retry after 446.122126ms: waiting for machine to come up
	I1108 00:13:00.921979   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetIP
	I1108 00:13:00.924760   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:13:00.925121   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:13:00.925148   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:13:00.925370   50505 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1108 00:13:00.929750   50505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:13:00.941338   50505 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1108 00:13:00.941372   50505 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:13:00.979343   50505 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1108 00:13:00.979370   50505 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.3 registry.k8s.io/kube-controller-manager:v1.28.3 registry.k8s.io/kube-scheduler:v1.28.3 registry.k8s.io/kube-proxy:v1.28.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1108 00:13:00.979489   50505 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1108 00:13:00.979539   50505 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1108 00:13:00.979465   50505 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:13:00.979636   50505 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.3
	I1108 00:13:00.979477   50505 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1108 00:13:00.979465   50505 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.3
	I1108 00:13:00.979515   50505 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1108 00:13:00.979516   50505 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.3
	I1108 00:13:00.980609   50505 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1108 00:13:00.980645   50505 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1108 00:13:00.980677   50505 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1108 00:13:00.980704   50505 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1108 00:13:00.980645   50505 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.3
	I1108 00:13:00.980733   50505 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:13:00.980949   50505 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.3
	I1108 00:13:00.980994   50505 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.3
	I1108 00:13:01.126154   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.3
	I1108 00:13:01.131334   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.3
	I1108 00:13:01.141929   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.3
	I1108 00:13:01.150051   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.3
	I1108 00:13:01.178472   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I1108 00:13:01.198519   50505 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.3" does not exist at hash "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076" in container runtime
	I1108 00:13:01.198569   50505 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.3
	I1108 00:13:01.198628   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:01.214419   50505 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.3" does not exist at hash "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3" in container runtime
	I1108 00:13:01.214470   50505 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1108 00:13:01.214527   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:01.249270   50505 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.3" does not exist at hash "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4" in container runtime
	I1108 00:13:01.249316   50505 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.3
	I1108 00:13:01.249321   50505 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.3" needs transfer: "registry.k8s.io/kube-proxy:v1.28.3" does not exist at hash "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf" in container runtime
	I1108 00:13:01.249354   50505 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.3
	I1108 00:13:01.249363   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:01.249398   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:01.257971   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1108 00:13:01.268557   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1108 00:13:01.279207   50505 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1108 00:13:01.279254   50505 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1108 00:13:01.279255   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.3
	I1108 00:13:01.279295   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:01.279304   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.3
	I1108 00:13:01.279365   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.3
	I1108 00:13:01.279492   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.3
	I1108 00:13:01.477649   50505 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1108 00:13:01.477691   50505 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1108 00:13:01.477740   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:01.477782   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3
	I1108 00:13:01.477888   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1108 00:13:01.477888   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3
	I1108 00:13:01.477963   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3
	I1108 00:13:01.478038   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3
	I1108 00:13:01.478005   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1108 00:13:01.478079   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1108 00:13:01.478116   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.3
	I1108 00:13:01.478121   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1108 00:13:01.489810   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1108 00:13:01.490983   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.3 (exists)
	I1108 00:13:01.491011   50505 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1108 00:13:01.491049   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1108 00:13:01.490984   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.3 (exists)
	I1108 00:13:01.556911   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1108 00:13:01.556996   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.3 (exists)
	I1108 00:13:01.557036   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1
	I1108 00:13:01.557048   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.3 (exists)
	I1108 00:13:01.576123   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1108 00:13:01.576251   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0
	I1108 00:13:02.001052   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:13:02.127888   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:02.128302   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:02.128333   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:02.128247   51493 retry.go:31] will retry after 498.0349ms: waiting for machine to come up
	I1108 00:13:02.627872   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:02.628339   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:02.628373   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:02.628296   51493 retry.go:31] will retry after 852.947554ms: waiting for machine to come up
	I1108 00:13:03.483507   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:03.484074   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:03.484119   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:03.484024   51493 retry.go:31] will retry after 1.040831469s: waiting for machine to come up
	I1108 00:13:04.526186   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:04.526503   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:04.526535   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:04.526446   51493 retry.go:31] will retry after 960.701342ms: waiting for machine to come up
	I1108 00:13:05.489041   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:05.489473   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:05.489509   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:05.489456   51493 retry.go:31] will retry after 1.729813733s: waiting for machine to come up
	I1108 00:13:04.536381   50505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3: (3.045307892s)
	I1108 00:13:04.536412   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 from cache
	I1108 00:13:04.536439   50505 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1108 00:13:04.536453   50505 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1: (2.979392017s)
	I1108 00:13:04.536485   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I1108 00:13:04.536491   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1108 00:13:04.536531   50505 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0: (2.960264305s)
	I1108 00:13:04.536549   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I1108 00:13:04.536590   50505 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.535505624s)
	I1108 00:13:04.536622   50505 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1108 00:13:04.536652   50505 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:13:04.536694   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:07.220832   50505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3: (2.68430655s)
	I1108 00:13:07.220863   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 from cache
	I1108 00:13:07.220898   50505 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1108 00:13:07.220902   50505 ssh_runner.go:235] Completed: which crictl: (2.684187653s)
	I1108 00:13:07.220982   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1108 00:13:07.221015   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:13:08.593275   50505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3: (1.372272111s)
	I1108 00:13:08.593311   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 from cache
	I1108 00:13:08.593326   50505 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.372286228s)
	I1108 00:13:08.593374   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1108 00:13:08.593338   50505 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.3
	I1108 00:13:08.593474   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1108 00:13:08.593479   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3
	I1108 00:13:07.221541   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:07.221969   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:07.221998   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:07.221953   51493 retry.go:31] will retry after 1.97898588s: waiting for machine to come up
	I1108 00:13:09.202332   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:09.202803   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:09.202831   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:09.202756   51493 retry.go:31] will retry after 2.565503631s: waiting for machine to come up
	I1108 00:13:11.769857   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:11.770332   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:11.770354   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:11.770292   51493 retry.go:31] will retry after 3.236419831s: waiting for machine to come up
	I1108 00:13:10.382696   50505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3: (1.789194848s)
	I1108 00:13:10.382726   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 from cache
	I1108 00:13:10.382747   50505 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.789249445s)
	I1108 00:13:10.382776   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1108 00:13:10.382752   50505 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1108 00:13:10.382828   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I1108 00:13:11.846184   50505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.463326325s)
	I1108 00:13:11.846222   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I1108 00:13:11.846254   50505 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I1108 00:13:11.846322   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I1108 00:13:15.008441   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:15.008899   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:15.008936   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:15.008860   51493 retry.go:31] will retry after 3.079379099s: waiting for machine to come up
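
	(The interleaved retry.go:31 lines above are libmachine polling libvirt until DHCP hands the VM an address, lengthening the delay on each attempt. A rough bash equivalent of that wait loop, for illustration only; it assumes virsh is available on the host and reuses the domain name from the log:)

	# Illustrative only: poll libvirt for a domain's DHCP lease with growing
	# backoff, roughly what the retry.go:31 lines above are doing.
	domain=embed-certs-253253
	delay=0.5
	for attempt in $(seq 1 15); do
	  ip=$(virsh -q domifaddr "$domain" 2>/dev/null | awk '{print $4}' | cut -d/ -f1)
	  if [ -n "$ip" ]; then echo "found IP $ip after $attempt attempt(s)"; break; fi
	  sleep "$delay"
	  delay=$(awk -v d="$delay" 'BEGIN { print d * 1.5 }')   # grow the backoff
	done
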
	I1108 00:13:19.138865   50505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.292505697s)
	I1108 00:13:19.138899   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I1108 00:13:19.138926   50505 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1108 00:13:19.138987   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1108 00:13:19.465800   51228 start.go:369] acquired machines lock for "default-k8s-diff-port-039263" in 1m18.442604828s
	I1108 00:13:19.465853   51228 start.go:96] Skipping create...Using existing machine configuration
	I1108 00:13:19.465863   51228 fix.go:54] fixHost starting: 
	I1108 00:13:19.466321   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:13:19.466373   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:13:19.485614   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32967
	I1108 00:13:19.486012   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:13:19.486457   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:13:19.486478   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:13:19.486839   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:13:19.487016   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:19.487158   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetState
	I1108 00:13:19.488697   51228 fix.go:102] recreateIfNeeded on default-k8s-diff-port-039263: state=Stopped err=<nil>
	I1108 00:13:19.488733   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	W1108 00:13:19.488889   51228 fix.go:128] unexpected machine state, will restart: <nil>
	I1108 00:13:19.490913   51228 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-039263" ...
	I1108 00:13:19.492333   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Start
	I1108 00:13:19.492481   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Ensuring networks are active...
	I1108 00:13:19.493162   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Ensuring network default is active
	I1108 00:13:19.493592   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Ensuring network mk-default-k8s-diff-port-039263 is active
	I1108 00:13:19.494016   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Getting domain xml...
	I1108 00:13:19.494668   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Creating domain...
	I1108 00:13:20.910918   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting to get IP...
	I1108 00:13:20.911948   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:20.912423   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:20.912517   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:20.912403   51635 retry.go:31] will retry after 265.914494ms: waiting for machine to come up
	I1108 00:13:18.092086   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.092516   50613 main.go:141] libmachine: (embed-certs-253253) Found IP for machine: 192.168.39.159
	I1108 00:13:18.092544   50613 main.go:141] libmachine: (embed-certs-253253) Reserving static IP address...
	I1108 00:13:18.092568   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has current primary IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.092947   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "embed-certs-253253", mac: "52:54:00:1a:6e:cb", ip: "192.168.39.159"} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.092980   50613 main.go:141] libmachine: (embed-certs-253253) DBG | skip adding static IP to network mk-embed-certs-253253 - found existing host DHCP lease matching {name: "embed-certs-253253", mac: "52:54:00:1a:6e:cb", ip: "192.168.39.159"}
	I1108 00:13:18.092999   50613 main.go:141] libmachine: (embed-certs-253253) Reserved static IP address: 192.168.39.159
	I1108 00:13:18.093019   50613 main.go:141] libmachine: (embed-certs-253253) Waiting for SSH to be available...
	I1108 00:13:18.093036   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Getting to WaitForSSH function...
	I1108 00:13:18.094941   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.095285   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.095311   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.095472   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Using SSH client type: external
	I1108 00:13:18.095487   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa (-rw-------)
	I1108 00:13:18.095519   50613 main.go:141] libmachine: (embed-certs-253253) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1108 00:13:18.095535   50613 main.go:141] libmachine: (embed-certs-253253) DBG | About to run SSH command:
	I1108 00:13:18.095545   50613 main.go:141] libmachine: (embed-certs-253253) DBG | exit 0
	I1108 00:13:18.184364   50613 main.go:141] libmachine: (embed-certs-253253) DBG | SSH cmd err, output: <nil>: 
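
	(The WaitForSSH probe above just runs `exit 0` through the external ssh binary with the logged options; assembled into one runnable command line for readability, with the options moved ahead of the destination as standard ssh usage expects:)

	# The probe from the log, assembled into a single command.
	ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	  -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	  -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	  -o IdentitiesOnly=yes \
	  -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa \
	  -p 22 docker@192.168.39.159 "exit 0"
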
	I1108 00:13:18.184700   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetConfigRaw
	I1108 00:13:18.264914   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetIP
	I1108 00:13:18.267404   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.267716   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.267752   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.267951   50613 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/config.json ...
	I1108 00:13:18.268153   50613 machine.go:88] provisioning docker machine ...
	I1108 00:13:18.268171   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:13:18.268382   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetMachineName
	I1108 00:13:18.268642   50613 buildroot.go:166] provisioning hostname "embed-certs-253253"
	I1108 00:13:18.268662   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetMachineName
	I1108 00:13:18.268783   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:18.270977   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.271275   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.271302   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.271485   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:18.271683   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.271873   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.272021   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:18.272185   50613 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:18.272549   50613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1108 00:13:18.272564   50613 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-253253 && echo "embed-certs-253253" | sudo tee /etc/hostname
	I1108 00:13:18.408618   50613 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-253253
	
	I1108 00:13:18.408655   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:18.411325   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.411629   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.411673   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.411793   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:18.412024   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.412204   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.412353   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:18.412513   50613 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:18.412864   50613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1108 00:13:18.412884   50613 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-253253' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-253253/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-253253' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 00:13:18.537585   50613 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 00:13:18.537611   50613 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1108 00:13:18.537628   50613 buildroot.go:174] setting up certificates
	I1108 00:13:18.537636   50613 provision.go:83] configureAuth start
	I1108 00:13:18.537644   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetMachineName
	I1108 00:13:18.537930   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetIP
	I1108 00:13:18.540544   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.540937   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.540966   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.541078   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:18.543184   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.543455   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.543486   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.543559   50613 provision.go:138] copyHostCerts
	I1108 00:13:18.543621   50613 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1108 00:13:18.543639   50613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1108 00:13:18.543692   50613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1108 00:13:18.543793   50613 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1108 00:13:18.543801   50613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1108 00:13:18.543823   50613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1108 00:13:18.543876   50613 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1108 00:13:18.543884   50613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1108 00:13:18.543900   50613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1108 00:13:18.543962   50613 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.embed-certs-253253 san=[192.168.39.159 192.168.39.159 localhost 127.0.0.1 minikube embed-certs-253253]
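
	(minikube generates the server certificate above in Go via crypto/x509; as a hedged illustration only, an equivalent openssl flow producing the same SAN set would look like the following. The file names here are hypothetical and this is not what minikube itself runs:)

	# Illustration only: reproduce the SAN set from the provision.go:112 line
	# with openssl instead of Go's crypto/x509.
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr -subj "/O=jenkins.embed-certs-253253"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf "subjectAltName=IP:192.168.39.159,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:embed-certs-253253")
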
	I1108 00:13:18.707824   50613 provision.go:172] copyRemoteCerts
	I1108 00:13:18.707880   50613 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 00:13:18.707905   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:18.710820   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.711181   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.711208   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.711437   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:18.711642   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.711815   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:18.711973   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:13:18.803200   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 00:13:18.827267   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1108 00:13:18.850710   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 00:13:18.876752   50613 provision.go:86] duration metric: configureAuth took 339.103407ms
	I1108 00:13:18.876781   50613 buildroot.go:189] setting minikube options for container-runtime
	I1108 00:13:18.876987   50613 config.go:182] Loaded profile config "embed-certs-253253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:13:18.877075   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:18.879751   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.880121   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.880149   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.880331   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:18.880501   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.880649   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.880772   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:18.880929   50613 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:18.881240   50613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1108 00:13:18.881257   50613 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 00:13:19.199987   50613 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 00:13:19.200012   50613 machine.go:91] provisioned docker machine in 931.846262ms
	I1108 00:13:19.200023   50613 start.go:300] post-start starting for "embed-certs-253253" (driver="kvm2")
	I1108 00:13:19.200035   50613 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 00:13:19.200057   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:13:19.200377   50613 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 00:13:19.200409   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:19.203230   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.203610   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:19.203644   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.203771   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:19.203963   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:19.204118   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:19.204231   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:13:19.297991   50613 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 00:13:19.303630   50613 info.go:137] Remote host: Buildroot 2021.02.12
	I1108 00:13:19.303655   50613 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1108 00:13:19.303721   50613 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1108 00:13:19.303831   50613 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1108 00:13:19.303956   50613 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 00:13:19.315605   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:13:19.339647   50613 start.go:303] post-start completed in 139.611237ms
	I1108 00:13:19.339665   50613 fix.go:56] fixHost completed within 19.805611247s
	I1108 00:13:19.339687   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:19.342291   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.342623   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:19.342648   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.342838   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:19.343019   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:19.343147   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:19.343323   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:19.343483   50613 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:19.343856   50613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1108 00:13:19.343868   50613 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1108 00:13:19.465645   50613 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699402399.415738784
	
	I1108 00:13:19.465670   50613 fix.go:206] guest clock: 1699402399.415738784
	I1108 00:13:19.465681   50613 fix.go:219] Guest: 2023-11-08 00:13:19.415738784 +0000 UTC Remote: 2023-11-08 00:13:19.339668655 +0000 UTC m=+237.442917453 (delta=76.070129ms)
	I1108 00:13:19.465704   50613 fix.go:190] guest clock delta is within tolerance: 76.070129ms
	I1108 00:13:19.465710   50613 start.go:83] releasing machines lock for "embed-certs-253253", held for 19.931686858s
	I1108 00:13:19.465738   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:13:19.465996   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetIP
	I1108 00:13:19.468862   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.469185   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:19.469223   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.469365   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:13:19.469898   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:13:19.470091   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:13:19.470174   50613 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 00:13:19.470215   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:19.470300   50613 ssh_runner.go:195] Run: cat /version.json
	I1108 00:13:19.470321   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:19.473140   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.473285   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.473517   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:19.473562   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.473594   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:19.473612   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.473662   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:19.473777   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:19.473843   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:19.474004   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:19.474007   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:19.474153   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:13:19.474192   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:19.474344   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:13:19.565638   50613 ssh_runner.go:195] Run: systemctl --version
	I1108 00:13:19.591686   50613 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 00:13:19.747192   50613 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 00:13:19.755053   50613 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 00:13:19.755134   50613 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:13:19.774522   50613 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 00:13:19.774551   50613 start.go:472] detecting cgroup driver to use...
	I1108 00:13:19.774652   50613 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 00:13:19.795492   50613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 00:13:19.809888   50613 docker.go:203] disabling cri-docker service (if available) ...
	I1108 00:13:19.809958   50613 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 00:13:19.823108   50613 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 00:13:19.835588   50613 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 00:13:19.940017   50613 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 00:13:20.075405   50613 docker.go:219] disabling docker service ...
	I1108 00:13:20.075460   50613 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 00:13:20.090949   50613 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 00:13:20.103551   50613 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 00:13:20.226887   50613 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 00:13:20.352088   50613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 00:13:20.367626   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 00:13:20.388084   50613 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1108 00:13:20.388153   50613 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:20.398506   50613 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 00:13:20.398573   50613 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:20.408335   50613 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:20.417991   50613 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:20.427599   50613 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 00:13:20.439537   50613 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 00:13:20.450914   50613 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1108 00:13:20.450972   50613 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1108 00:13:20.464456   50613 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 00:13:20.475133   50613 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 00:13:20.586162   50613 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 00:13:20.799540   50613 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 00:13:20.799615   50613 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 00:13:20.808503   50613 start.go:540] Will wait 60s for crictl version
	I1108 00:13:20.808551   50613 ssh_runner.go:195] Run: which crictl
	I1108 00:13:20.812371   50613 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 00:13:20.853073   50613 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
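
	(For reference, the CRI-O configuration steps logged above collapse to a handful of idempotent edits to 02-crio.conf followed by a restart and version check; every command below is taken verbatim from the log:)

	# Point CRI-O at the pause image, force the cgroupfs manager, keep conmon
	# in the pod cgroup, clear stale CNI state, then restart and verify.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo rm -rf /etc/cni/net.mk
	sudo systemctl daemon-reload && sudo systemctl restart crio
	sudo /usr/bin/crictl version   # expect RuntimeName: cri-o
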
	I1108 00:13:20.853166   50613 ssh_runner.go:195] Run: crio --version
	I1108 00:13:20.904737   50613 ssh_runner.go:195] Run: crio --version
	I1108 00:13:20.958281   50613 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1108 00:13:20.959792   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetIP
	I1108 00:13:20.962399   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:20.962740   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:20.962775   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:20.963037   50613 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1108 00:13:20.967403   50613 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:13:20.980199   50613 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1108 00:13:20.980261   50613 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:13:21.024679   50613 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1108 00:13:21.024757   50613 ssh_runner.go:195] Run: which lz4
	I1108 00:13:21.028861   50613 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1108 00:13:21.032736   50613 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1108 00:13:21.032762   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
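
	(The preload transfer above is guarded by a plain existence check: stat the tarball on the guest and only copy the ~457 MB archive when it is missing. A bash sketch of that guard, with host and key options elided:)

	# Copy the preload tarball only when the guest lacks it. Paths come from
	# the log; the real transfer runs over minikube's established SSH session.
	src=/home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	if ! ssh docker@192.168.39.159 stat -c '%s %y' /preloaded.tar.lz4 >/dev/null 2>&1; then
	  scp "$src" docker@192.168.39.159:/preloaded.tar.lz4
	fi
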
	I1108 00:13:19.898602   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1108 00:13:19.898655   50505 cache_images.go:123] Successfully loaded all cached images
	I1108 00:13:19.898663   50505 cache_images.go:92] LoadImages completed in 18.919280882s
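
	(The 18.9s LoadImages pass above repeats one pattern per image: inspect the runtime for the expected image ID, remove any stale tag with crictl, then podman-load the cached tarball. A simplified bash sketch using one image from the log; minikube actually compares the inspected ID against the cached hash rather than mere presence:)

	# Per-image cache load, simplified to a presence check.
	img=registry.k8s.io/kube-apiserver:v1.28.3
	tarball=/var/lib/minikube/images/kube-apiserver_v1.28.3
	if ! sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
	  sudo /usr/bin/crictl rmi "$img" || true
	  sudo podman load -i "$tarball"
	fi
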
	I1108 00:13:19.898742   50505 ssh_runner.go:195] Run: crio config
	I1108 00:13:19.970909   50505 cni.go:84] Creating CNI manager for ""
	I1108 00:13:19.970936   50505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:13:19.970958   50505 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1108 00:13:19.970986   50505 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.176 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-320390 NodeName:no-preload-320390 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 00:13:19.971171   50505 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.176
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-320390"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.176
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.176"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 00:13:19.971273   50505 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-320390 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:no-preload-320390 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1108 00:13:19.971347   50505 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1108 00:13:19.984469   50505 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 00:13:19.984551   50505 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 00:13:19.995491   50505 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1108 00:13:20.013609   50505 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 00:13:20.031507   50505 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
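
	(After the 10-kubeadm.conf drop-in, the kubelet.service unit, and kubeadm.yaml.new land, a sanity check such as the following would confirm systemd sees the drop-in; this is illustrative and not something the log itself runs:)

	# Hypothetical verification step, not present in the log.
	sudo systemctl daemon-reload
	systemctl cat kubelet.service                  # unit plus drop-in contents
	systemctl show -p DropInPaths kubelet.service  # should list 10-kubeadm.conf
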
	I1108 00:13:20.051978   50505 ssh_runner.go:195] Run: grep 192.168.61.176	control-plane.minikube.internal$ /etc/hosts
	I1108 00:13:20.057139   50505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.176	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
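	The hosts-file one-liner above is an atomic-rewrite idiom: grep -v drops any stale control-plane.minikube.internal entry, echo appends the current IP, the combined output goes to a temp file keyed on the shell PID, and sudo cp swaps it into place in one step. The same pattern in isolation (IP and hostname taken from the log; a sketch, not minikube's own code):
	
	  { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	    echo "192.168.61.176	control-plane.minikube.internal"
	  } > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts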
	I1108 00:13:20.071438   50505 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390 for IP: 192.168.61.176
	I1108 00:13:20.071471   50505 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:13:20.071635   50505 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1108 00:13:20.071691   50505 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1108 00:13:20.071782   50505 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/client.key
	I1108 00:13:20.071848   50505 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/apiserver.key.492ad1cf
	I1108 00:13:20.071899   50505 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/proxy-client.key
	I1108 00:13:20.072026   50505 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem (1338 bytes)
	W1108 00:13:20.072064   50505 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848_empty.pem, impossibly tiny 0 bytes
	I1108 00:13:20.072080   50505 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 00:13:20.072130   50505 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1108 00:13:20.072167   50505 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1108 00:13:20.072205   50505 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1108 00:13:20.072260   50505 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:13:20.073092   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1108 00:13:20.099422   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 00:13:20.126257   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 00:13:20.153126   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 00:13:20.184849   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 00:13:20.215515   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 00:13:20.247686   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 00:13:20.277712   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 00:13:20.304438   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /usr/share/ca-certificates/168482.pem (1708 bytes)
	I1108 00:13:20.330321   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 00:13:20.361411   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem --> /usr/share/ca-certificates/16848.pem (1338 bytes)
	I1108 00:13:20.390456   50505 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 00:13:20.410634   50505 ssh_runner.go:195] Run: openssl version
	I1108 00:13:20.418597   50505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168482.pem && ln -fs /usr/share/ca-certificates/168482.pem /etc/ssl/certs/168482.pem"
	I1108 00:13:20.431853   50505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168482.pem
	I1108 00:13:20.438127   50505 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1108 00:13:20.438271   50505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168482.pem
	I1108 00:13:20.445644   50505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168482.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 00:13:20.456959   50505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 00:13:20.466413   50505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:20.472311   50505 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:20.472365   50505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:20.477965   50505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 00:13:20.487454   50505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16848.pem && ln -fs /usr/share/ca-certificates/16848.pem /etc/ssl/certs/16848.pem"
	I1108 00:13:20.496731   50505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16848.pem
	I1108 00:13:20.502531   50505 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1108 00:13:20.502591   50505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16848.pem
	I1108 00:13:20.509683   50505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16848.pem /etc/ssl/certs/51391683.0"
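	The alternating openssl/ln steps above implement OpenSSL's hash-directory convention: TLS stacks look a CA up in /etc/ssl/certs by a filename of the form <subject-hash>.0, and "openssl x509 -hash -noout" prints exactly that hash, which is where names like 3ec20f2e.0 and b5213941.0 come from. Reproduced by hand (a sketch; paths as in the log):
	
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # yields b5213941.0 here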
	I1108 00:13:20.520960   50505 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1108 00:13:20.525545   50505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 00:13:20.531367   50505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 00:13:20.537422   50505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 00:13:20.543607   50505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 00:13:20.548942   50505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 00:13:20.554419   50505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
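	Each -checkend 86400 probe above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit is what would trigger regeneration. In isolation (a sketch using one of the certs from the log):
	
	  if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt; then
	      echo "still valid in 24h"
	  else
	      echo "expires within 24h; would be regenerated"
	  fi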
	I1108 00:13:20.559633   50505 kubeadm.go:404] StartCluster: {Name:no-preload-320390 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-320390 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.176 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:13:20.559719   50505 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 00:13:20.559766   50505 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:13:20.603718   50505 cri.go:89] found id: ""
	I1108 00:13:20.603795   50505 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 00:13:20.613389   50505 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1108 00:13:20.613418   50505 kubeadm.go:636] restartCluster start
	I1108 00:13:20.613476   50505 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 00:13:20.622276   50505 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:20.623645   50505 kubeconfig.go:92] found "no-preload-320390" server: "https://192.168.61.176:8443"
	I1108 00:13:20.626874   50505 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 00:13:20.638188   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:20.638238   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:20.649536   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:20.649553   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:20.649610   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:20.660145   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:21.160858   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:21.160936   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:21.174163   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:21.660441   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:21.660526   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:21.675795   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:22.160281   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:22.160358   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:22.175777   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:22.660249   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:22.660328   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:22.675747   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:23.160280   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:23.160360   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:23.174686   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:23.661260   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:23.661343   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:23.675936   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:24.160440   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:24.160558   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:24.174501   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
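	The repeating block above is minikube's apiserver-process poll: pgrep (-x exact match, -n newest, -f match the full command line) is run against the kube-apiserver command line roughly every 500ms, and exit status 1 (no match) is read as "not up yet". A standalone equivalent of the loop (sketch):
	
	  until pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*'); do
	      sleep 0.5   # matches the ~500ms cadence visible in the timestamps
	  done
	  echo "apiserver pid: ${pid}"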
	I1108 00:13:21.180066   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:21.180534   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:21.180563   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:21.180492   51635 retry.go:31] will retry after 320.996627ms: waiting for machine to come up
	I1108 00:13:21.503202   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:21.503721   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:21.503750   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:21.503689   51635 retry.go:31] will retry after 431.944242ms: waiting for machine to come up
	I1108 00:13:21.937564   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:21.938025   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:21.938054   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:21.937972   51635 retry.go:31] will retry after 592.354358ms: waiting for machine to come up
	I1108 00:13:22.531850   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:22.532321   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:22.532364   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:22.532272   51635 retry.go:31] will retry after 589.753727ms: waiting for machine to come up
	I1108 00:13:23.124275   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:23.124784   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:23.124825   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:23.124746   51635 retry.go:31] will retry after 596.910282ms: waiting for machine to come up
	I1108 00:13:23.722967   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:23.723389   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:23.723419   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:23.723349   51635 retry.go:31] will retry after 793.320391ms: waiting for machine to come up
	I1108 00:13:24.518525   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:24.518953   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:24.518985   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:24.518914   51635 retry.go:31] will retry after 1.247294281s: waiting for machine to come up
	I1108 00:13:25.768137   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:25.768598   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:25.768634   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:25.768541   51635 retry.go:31] will retry after 1.468389149s: waiting for machine to come up
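	The libmachine retries above are waiting for the freshly created KVM domain to pick up a DHCP lease on its libvirt network; the backoff grows from a few hundred milliseconds toward seconds until MAC 52:54:00:aa:72:05 shows up with an address. What the loop is effectively checking (a sketch, assuming virsh access to the same qemu:///system URI):
	
	  virsh -c qemu:///system net-dhcp-leases mk-default-k8s-diff-port-039263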
	I1108 00:13:22.802292   50613 crio.go:444] Took 1.773480 seconds to copy over tarball
	I1108 00:13:22.802374   50613 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1108 00:13:25.811996   50613 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.009592787s)
	I1108 00:13:25.812027   50613 crio.go:451] Took 3.009706 seconds to extract the tarball
	I1108 00:13:25.812036   50613 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1108 00:13:25.852011   50613 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:13:25.903032   50613 crio.go:496] all images are preloaded for cri-o runtime.
	I1108 00:13:25.903055   50613 cache_images.go:84] Images are preloaded, skipping loading
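	The three steps above are the preload fast path: the lz4 tarball of container images is untarred into /var, deleted, and crictl is then asked for the image list so minikube can confirm nothing needs to be pulled or side-loaded. To eyeball the same list by hand (a sketch; assumes jq is available on the node):
	
	  sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort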
	I1108 00:13:25.903160   50613 ssh_runner.go:195] Run: crio config
	I1108 00:13:25.964562   50613 cni.go:84] Creating CNI manager for ""
	I1108 00:13:25.964585   50613 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:13:25.964601   50613 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1108 00:13:25.964618   50613 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.159 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-253253 NodeName:embed-certs-253253 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 00:13:25.964768   50613 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-253253"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 00:13:25.964869   50613 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-253253 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:embed-certs-253253 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1108 00:13:25.964931   50613 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1108 00:13:25.973956   50613 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 00:13:25.974031   50613 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 00:13:25.982070   50613 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1108 00:13:26.001066   50613 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 00:13:26.020258   50613 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1108 00:13:26.039418   50613 ssh_runner.go:195] Run: grep 192.168.39.159	control-plane.minikube.internal$ /etc/hosts
	I1108 00:13:26.043133   50613 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:13:26.055865   50613 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253 for IP: 192.168.39.159
	I1108 00:13:26.055902   50613 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:13:26.056069   50613 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1108 00:13:26.056268   50613 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1108 00:13:26.056374   50613 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/client.key
	I1108 00:13:26.128533   50613 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/apiserver.key.b15c5797
	I1108 00:13:26.128666   50613 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/proxy-client.key
	I1108 00:13:26.128842   50613 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem (1338 bytes)
	W1108 00:13:26.128884   50613 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848_empty.pem, impossibly tiny 0 bytes
	I1108 00:13:26.128895   50613 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 00:13:26.128930   50613 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1108 00:13:26.128953   50613 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1108 00:13:26.128975   50613 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1108 00:13:26.129016   50613 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:13:26.129621   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1108 00:13:26.153776   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 00:13:26.179006   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 00:13:26.202199   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 00:13:26.225241   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 00:13:26.247745   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 00:13:26.270546   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 00:13:26.297075   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 00:13:26.320835   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem --> /usr/share/ca-certificates/16848.pem (1338 bytes)
	I1108 00:13:26.344068   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /usr/share/ca-certificates/168482.pem (1708 bytes)
	I1108 00:13:26.367085   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 00:13:26.391491   50613 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 00:13:26.408055   50613 ssh_runner.go:195] Run: openssl version
	I1108 00:13:26.413824   50613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168482.pem && ln -fs /usr/share/ca-certificates/168482.pem /etc/ssl/certs/168482.pem"
	I1108 00:13:26.423666   50613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168482.pem
	I1108 00:13:26.428281   50613 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1108 00:13:26.428332   50613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168482.pem
	I1108 00:13:26.433901   50613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168482.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 00:13:26.443832   50613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 00:13:26.453722   50613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:26.458290   50613 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:26.458341   50613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:26.464035   50613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 00:13:26.473908   50613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16848.pem && ln -fs /usr/share/ca-certificates/16848.pem /etc/ssl/certs/16848.pem"
	I1108 00:13:26.483600   50613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16848.pem
	I1108 00:13:26.488053   50613 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1108 00:13:26.488113   50613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16848.pem
	I1108 00:13:26.493571   50613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16848.pem /etc/ssl/certs/51391683.0"
	I1108 00:13:26.503466   50613 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1108 00:13:26.508047   50613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 00:13:26.514165   50613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 00:13:26.520278   50613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 00:13:26.526421   50613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 00:13:26.532388   50613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 00:13:26.538323   50613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1108 00:13:26.544215   50613 kubeadm.go:404] StartCluster: {Name:embed-certs-253253 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-253253 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:13:26.544287   50613 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 00:13:26.544330   50613 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:13:26.586501   50613 cri.go:89] found id: ""
	I1108 00:13:26.586578   50613 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 00:13:26.596647   50613 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1108 00:13:26.596676   50613 kubeadm.go:636] restartCluster start
	I1108 00:13:26.596734   50613 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 00:13:26.605901   50613 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:26.607305   50613 kubeconfig.go:92] found "embed-certs-253253" server: "https://192.168.39.159:8443"
	I1108 00:13:26.610434   50613 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 00:13:26.619238   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:26.619291   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:26.630724   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:26.630746   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:26.630787   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:26.641997   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:24.660263   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:24.660349   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:24.675197   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:25.160678   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:25.160774   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:25.172593   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:25.660613   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:25.660696   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:25.672242   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:26.160884   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:26.160978   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:26.174734   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:26.660269   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:26.660337   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:26.671721   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:27.160250   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:27.160344   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:27.171104   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:27.660667   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:27.660729   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:27.671899   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:28.160408   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:28.160471   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:28.170733   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:28.660264   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:28.660338   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:28.671482   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:29.161084   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:29.161163   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:29.172174   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:27.238049   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:27.238487   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:27.238518   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:27.238428   51635 retry.go:31] will retry after 1.602246301s: waiting for machine to come up
	I1108 00:13:28.842785   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:28.843235   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:28.843259   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:28.843188   51635 retry.go:31] will retry after 2.218327688s: waiting for machine to come up
	I1108 00:13:27.142567   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:27.242647   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:27.256767   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:27.642212   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:27.642306   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:27.654185   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:28.142751   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:28.142832   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:28.154141   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:28.642738   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:28.642817   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:28.654476   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:29.143085   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:29.143168   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:29.154553   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:29.642422   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:29.642499   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:29.658048   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:30.142497   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:30.142568   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:30.153710   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:30.642216   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:30.642291   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:30.658036   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:31.142547   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:31.142634   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:31.159124   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:31.642720   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:31.642810   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:31.654593   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:29.660882   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:29.660944   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:29.675528   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:30.161058   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:30.161121   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:30.171493   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:30.638722   50505 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1108 00:13:30.638762   50505 kubeadm.go:1128] stopping kube-system containers ...
	I1108 00:13:30.638776   50505 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1108 00:13:30.638825   50505 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:13:30.677982   50505 cri.go:89] found id: ""
	I1108 00:13:30.678064   50505 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1108 00:13:30.693650   50505 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:13:30.702679   50505 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:13:30.702757   50505 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:13:30.711179   50505 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1108 00:13:30.711212   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:30.843638   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:31.970868   50505 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.127188218s)
	I1108 00:13:31.970904   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:32.167903   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:32.242076   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
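	With the kubeconfigs missing, minikube replays the individual kubeadm init phases rather than running a full init, reusing the kubeadm.yaml it just copied into place. The sequence, as run above (minus the PATH wrapper):
	
	  kubeadm init phase certs all          --config /var/tmp/minikube/kubeadm.yaml
	  kubeadm init phase kubeconfig all     --config /var/tmp/minikube/kubeadm.yaml
	  kubeadm init phase kubelet-start      --config /var/tmp/minikube/kubeadm.yaml
	  kubeadm init phase control-plane all  --config /var/tmp/minikube/kubeadm.yaml
	  kubeadm init phase etcd local         --config /var/tmp/minikube/kubeadm.yaml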
	I1108 00:13:32.324914   50505 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:13:32.325001   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:32.342576   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:32.861296   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:33.360958   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:33.861308   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:31.062973   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:31.063425   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:31.063465   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:31.063370   51635 retry.go:31] will retry after 2.935881965s: waiting for machine to come up
	I1108 00:13:34.002009   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:34.002456   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:34.002481   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:34.002385   51635 retry.go:31] will retry after 2.918632194s: waiting for machine to come up
	I1108 00:13:32.142573   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:32.142668   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:32.156513   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:32.643129   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:32.643203   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:32.654790   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:33.143023   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:33.143114   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:33.159475   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:33.642631   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:33.642728   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:33.658632   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:34.142142   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:34.142218   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:34.158375   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:34.642356   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:34.642437   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:34.657692   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:35.142180   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:35.142276   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:35.157616   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:35.642121   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:35.642194   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:35.656642   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:36.142162   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:36.142270   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:36.153340   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:36.619909   50613 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1108 00:13:36.619941   50613 kubeadm.go:1128] stopping kube-system containers ...
	I1108 00:13:36.619958   50613 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1108 00:13:36.620035   50613 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:13:36.656935   50613 cri.go:89] found id: ""
	I1108 00:13:36.657008   50613 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1108 00:13:36.671784   50613 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:13:36.680073   50613 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:13:36.680120   50613 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:13:36.688560   50613 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1108 00:13:36.688575   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:36.802484   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
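
Rather than a full `kubeadm init`, the reconfigure path replays individual init phases against the regenerated kubeadm.yaml: certs and kubeconfig here, then kubelet-start, control-plane and etcd a little further down. A condensed local sketch of that sequence (minikube actually drives each phase over SSH through ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const binDir = "/var/lib/minikube/binaries/v1.28.3"
	const cfg = "/var/tmp/minikube/kubeadm.yaml"
	// Same phase order as the log; each phase regenerates only its own artifacts.
	for _, phase := range []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"} {
		cmd := exec.Command("/bin/bash", "-c",
			fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`, binDir, phase, cfg))
		if out, err := cmd.CombinedOutput(); err != nil {
			panic(fmt.Sprintf("phase %q failed: %v\n%s", phase, err, out))
		}
	}
}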
	I1108 00:13:34.361558   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:34.860720   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:34.881793   50505 api_server.go:72] duration metric: took 2.55688905s to wait for apiserver process to appear ...
	I1108 00:13:34.881823   50505 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:13:34.881843   50505 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1108 00:13:38.396447   50505 api_server.go:279] https://192.168.61.176:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:13:38.396488   50505 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:13:38.396503   50505 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1108 00:13:38.471135   50505 api_server.go:279] https://192.168.61.176:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:13:38.471165   50505 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:13:38.971845   50505 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1108 00:13:38.977126   50505 api_server.go:279] https://192.168.61.176:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:13:38.977163   50505 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:13:39.472030   50505 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1108 00:13:39.477778   50505 api_server.go:279] https://192.168.61.176:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:13:39.477810   50505 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:13:39.971333   50505 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1108 00:13:39.977224   50505 api_server.go:279] https://192.168.61.176:8443/healthz returned 200:
	ok
	I1108 00:13:39.987415   50505 api_server.go:141] control plane version: v1.28.3
	I1108 00:13:39.987446   50505 api_server.go:131] duration metric: took 5.10561478s to wait for apiserver health ...
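
The healthz wait above is deliberately tolerant: a 403 just means anonymous requests are rejected before RBAC bootstrap finishes, and a 500 means some poststarthooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still pending, so both are retried until the endpoint returns 200 "ok". A sketch of that loop, using InsecureSkipVerify only as a shortcut; a real client would load minikube's CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	url := "https://192.168.61.176:8443/healthz" // endpoint from the log above
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 and 500 both mean "up but not ready yet"; only 200 ends the wait.
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthy:", string(body))
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for /healthz")
}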
	I1108 00:13:39.987456   50505 cni.go:84] Creating CNI manager for ""
	I1108 00:13:39.987465   50505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:13:39.989270   50505 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
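
For the kvm2 driver with the crio runtime, minikube falls back to a plain bridge CNI and writes a conflist into /etc/cni/net.d (the 457-byte 1-k8s.conflist copied later in this log via "scp memory"). The exact file is not reproduced here; the sketch below writes a representative bridge conflist, with the subnet and plugin list as assumptions:

package main

import (
	"fmt"
	"os"
)

// Representative bridge conflist in the spirit of /etc/cni/net.d/1-k8s.conflist;
// not a byte-for-byte copy of the file minikube generates.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}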
	I1108 00:13:36.922427   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:36.922874   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:36.922916   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:36.922824   51635 retry.go:31] will retry after 3.960656744s: waiting for machine to come up
	I1108 00:13:40.886022   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:40.886563   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Found IP for machine: 192.168.72.116
	I1108 00:13:40.886591   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has current primary IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:40.886601   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Reserving static IP address...
	I1108 00:13:40.886974   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-039263", mac: "52:54:00:aa:72:05", ip: "192.168.72.116"} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:40.887012   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | skip adding static IP to network mk-default-k8s-diff-port-039263 - found existing host DHCP lease matching {name: "default-k8s-diff-port-039263", mac: "52:54:00:aa:72:05", ip: "192.168.72.116"}
	I1108 00:13:40.887037   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Getting to WaitForSSH function...
	I1108 00:13:40.887058   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Reserved static IP address: 192.168.72.116
	I1108 00:13:40.887072   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for SSH to be available...
	I1108 00:13:40.889373   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:40.889771   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:40.889803   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:40.889991   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Using SSH client type: external
	I1108 00:13:40.890018   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa (-rw-------)
	I1108 00:13:40.890054   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1108 00:13:40.890068   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | About to run SSH command:
	I1108 00:13:40.890082   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | exit 0
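
"Waiting for SSH" above amounts to shelling out to the system ssh client with a no-op command (`exit 0`) until both the handshake and the command succeed. A trimmed-down sketch of that probe, using a subset of the options shown in the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH keeps running `exit 0` over ssh until it succeeds or we time out,
// mirroring libmachine's readiness check.
func waitForSSH(ip, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+ip, "exit 0")
		if cmd.Run() == nil {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s not ready after %s", ip, timeout)
}

func main() {
	// Key path is illustrative; the log uses the per-machine id_rsa under .minikube.
	fmt.Println(waitForSSH("192.168.72.116", "/path/to/id_rsa", time.Minute))
}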
	I1108 00:13:37.573684   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:37.781978   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:37.863424   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:37.935306   50613 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:13:37.935377   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:37.947059   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:38.458806   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:38.959076   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:39.459045   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:39.959244   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:40.458249   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:40.480623   50613 api_server.go:72] duration metric: took 2.545315304s to wait for apiserver process to appear ...
	I1108 00:13:40.480650   50613 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:13:40.480668   50613 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1108 00:13:42.285976   50022 start.go:369] acquired machines lock for "old-k8s-version-590541" in 56.809842177s
	I1108 00:13:42.286028   50022 start.go:96] Skipping create...Using existing machine configuration
	I1108 00:13:42.286039   50022 fix.go:54] fixHost starting: 
	I1108 00:13:42.286455   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:13:42.286492   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:13:42.305869   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46287
	I1108 00:13:42.306363   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:13:42.306845   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:13:42.306871   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:13:42.307221   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:13:42.307548   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:13:42.307740   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetState
	I1108 00:13:42.309513   50022 fix.go:102] recreateIfNeeded on old-k8s-version-590541: state=Stopped err=<nil>
	I1108 00:13:42.309539   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	W1108 00:13:42.309706   50022 fix.go:128] unexpected machine state, will restart: <nil>
	I1108 00:13:42.311819   50022 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-590541" ...
	I1108 00:13:40.997357   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | SSH cmd err, output: <nil>: 
	I1108 00:13:40.997688   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetConfigRaw
	I1108 00:13:40.998394   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetIP
	I1108 00:13:41.001148   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.001578   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.001612   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.001940   51228 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/config.json ...
	I1108 00:13:41.002174   51228 machine.go:88] provisioning docker machine ...
	I1108 00:13:41.002197   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:41.002421   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetMachineName
	I1108 00:13:41.002577   51228 buildroot.go:166] provisioning hostname "default-k8s-diff-port-039263"
	I1108 00:13:41.002600   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetMachineName
	I1108 00:13:41.002800   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:41.005167   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.005544   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.005584   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.005873   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:41.006029   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.006176   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.006291   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:41.006425   51228 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:41.007012   51228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.116 22 <nil> <nil>}
	I1108 00:13:41.007036   51228 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-039263 && echo "default-k8s-diff-port-039263" | sudo tee /etc/hostname
	I1108 00:13:41.168664   51228 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-039263
	
	I1108 00:13:41.168698   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:41.171709   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.172090   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.172132   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.172266   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:41.172457   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.172650   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.172867   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:41.173130   51228 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:41.173626   51228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.116 22 <nil> <nil>}
	I1108 00:13:41.173654   51228 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-039263' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-039263/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-039263' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 00:13:41.324510   51228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 00:13:41.324539   51228 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1108 00:13:41.324586   51228 buildroot.go:174] setting up certificates
	I1108 00:13:41.324598   51228 provision.go:83] configureAuth start
	I1108 00:13:41.324610   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetMachineName
	I1108 00:13:41.324933   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetIP
	I1108 00:13:41.327797   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.328176   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.328213   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.328321   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:41.330558   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.330921   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.330955   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.331062   51228 provision.go:138] copyHostCerts
	I1108 00:13:41.331128   51228 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1108 00:13:41.331150   51228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1108 00:13:41.331222   51228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1108 00:13:41.331337   51228 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1108 00:13:41.331355   51228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1108 00:13:41.331387   51228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1108 00:13:41.331467   51228 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1108 00:13:41.331479   51228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1108 00:13:41.331506   51228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1108 00:13:41.331592   51228 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-039263 san=[192.168.72.116 192.168.72.116 localhost 127.0.0.1 minikube default-k8s-diff-port-039263]
	I1108 00:13:41.452051   51228 provision.go:172] copyRemoteCerts
	I1108 00:13:41.452123   51228 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 00:13:41.452156   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:41.454755   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.455056   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.455089   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.455288   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:41.455512   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.455704   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:41.455831   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:13:41.554387   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 00:13:41.586357   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 00:13:41.616703   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1108 00:13:41.646461   51228 provision.go:86] duration metric: configureAuth took 321.850044ms
	I1108 00:13:41.646489   51228 buildroot.go:189] setting minikube options for container-runtime
	I1108 00:13:41.646730   51228 config.go:182] Loaded profile config "default-k8s-diff-port-039263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:13:41.646825   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:41.650386   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.650813   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.650856   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.651031   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:41.651232   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.651422   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.651598   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:41.651763   51228 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:41.652302   51228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.116 22 <nil> <nil>}
	I1108 00:13:41.652325   51228 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 00:13:42.006373   51228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 00:13:42.006401   51228 machine.go:91] provisioned docker machine in 1.004212938s
	I1108 00:13:42.006414   51228 start.go:300] post-start starting for "default-k8s-diff-port-039263" (driver="kvm2")
	I1108 00:13:42.006426   51228 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 00:13:42.006445   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:42.006785   51228 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 00:13:42.006811   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:42.009619   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.009950   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:42.009986   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.010127   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:42.010344   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:42.010515   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:42.010673   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:13:42.106366   51228 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 00:13:42.110676   51228 info.go:137] Remote host: Buildroot 2021.02.12
	I1108 00:13:42.110701   51228 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1108 00:13:42.110770   51228 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1108 00:13:42.110869   51228 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1108 00:13:42.110972   51228 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 00:13:42.121223   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:13:42.146966   51228 start.go:303] post-start completed in 140.536976ms
	I1108 00:13:42.146996   51228 fix.go:56] fixHost completed within 22.681133015s
	I1108 00:13:42.147019   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:42.149705   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.150132   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:42.150165   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.150406   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:42.150606   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:42.150818   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:42.150988   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:42.151156   51228 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:42.151511   51228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.116 22 <nil> <nil>}
	I1108 00:13:42.151523   51228 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1108 00:13:42.285789   51228 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699402422.233004693
	
	I1108 00:13:42.285815   51228 fix.go:206] guest clock: 1699402422.233004693
	I1108 00:13:42.285823   51228 fix.go:219] Guest: 2023-11-08 00:13:42.233004693 +0000 UTC Remote: 2023-11-08 00:13:42.146999966 +0000 UTC m=+101.273648910 (delta=86.004727ms)
	I1108 00:13:42.285869   51228 fix.go:190] guest clock delta is within tolerance: 86.004727ms
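
The clock check works by running `date +%s.%N` in the guest, parsing the result as seconds.nanoseconds, and comparing it against the host's wall clock; only when the absolute delta exceeds a tolerance does minikube resync. A small sketch of the comparison (the tolerance value here is an assumption; the log only shows that an 86ms delta passed):

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest/host clock skew is acceptable.
func withinTolerance(guest, host time.Time, tol time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tol
}

func main() {
	guest := time.Unix(1699402422, 233004693)      // parsed from `date +%s.%N`
	host := guest.Add(-86004727 * time.Nanosecond) // host lagged by ~86ms here
	fmt.Println(withinTolerance(guest, host, 2*time.Second)) // true
}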
	I1108 00:13:42.285877   51228 start.go:83] releasing machines lock for "default-k8s-diff-port-039263", held for 22.820045752s
	I1108 00:13:42.285913   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:42.286161   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetIP
	I1108 00:13:42.288711   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.289095   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:42.289133   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.289241   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:42.289864   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:42.290109   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:42.290209   51228 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 00:13:42.290261   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:42.290323   51228 ssh_runner.go:195] Run: cat /version.json
	I1108 00:13:42.290345   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:42.293063   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.293219   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.293451   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:42.293483   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.293570   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:42.293599   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.293721   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:42.293878   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:42.293887   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:42.294075   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:42.294085   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:42.294234   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:42.294280   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:13:42.294336   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:13:42.386493   51228 ssh_runner.go:195] Run: systemctl --version
	I1108 00:13:42.411009   51228 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 00:13:42.558200   51228 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 00:13:42.566040   51228 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 00:13:42.566116   51228 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:13:42.584775   51228 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 00:13:42.584800   51228 start.go:472] detecting cgroup driver to use...
	I1108 00:13:42.584872   51228 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 00:13:42.598720   51228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 00:13:42.612836   51228 docker.go:203] disabling cri-docker service (if available) ...
	I1108 00:13:42.612927   51228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 00:13:42.627474   51228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 00:13:42.641670   51228 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 00:13:42.753616   51228 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 00:13:42.888608   51228 docker.go:219] disabling docker service ...
	I1108 00:13:42.888680   51228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 00:13:42.903298   51228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 00:13:42.920184   51228 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 00:13:43.054621   51228 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 00:13:43.181836   51228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 00:13:43.198481   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 00:13:43.219759   51228 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1108 00:13:43.219827   51228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:43.231137   51228 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 00:13:43.231221   51228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:43.242206   51228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:43.253506   51228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:43.264311   51228 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 00:13:43.276451   51228 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 00:13:43.288448   51228 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1108 00:13:43.288522   51228 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1108 00:13:43.305986   51228 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
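
The netfilter check above is a probe-then-fallback: `sysctl net.bridge.bridge-nf-call-iptables` failing with status 255 just means the br_netfilter module is not loaded yet, so minikube loads it with modprobe instead of treating the probe as fatal, then enables IPv4 forwarding. A sketch of that fallback logic:

package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %w\n%s", name, args, err, out)
	}
	return nil
}

func main() {
	// Probe the sysctl key; failure means br_netfilter isn't loaded, which is
	// recoverable, so fall back to modprobe rather than aborting.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			panic(err)
		}
	}
	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		panic(err)
	}
}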
	I1108 00:13:43.318366   51228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 00:13:43.479739   51228 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 00:13:43.705223   51228 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 00:13:43.705302   51228 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 00:13:43.711842   51228 start.go:540] Will wait 60s for crictl version
	I1108 00:13:43.711915   51228 ssh_runner.go:195] Run: which crictl
	I1108 00:13:43.717688   51228 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 00:13:43.762492   51228 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1108 00:13:43.762651   51228 ssh_runner.go:195] Run: crio --version
	I1108 00:13:43.814548   51228 ssh_runner.go:195] Run: crio --version
	I1108 00:13:43.870144   51228 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1108 00:13:39.990811   50505 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:13:40.020162   50505 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1108 00:13:40.064758   50505 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:13:40.081652   50505 system_pods.go:59] 8 kube-system pods found
	I1108 00:13:40.081705   50505 system_pods.go:61] "coredns-5dd5756b68-lhnz5" [936252ee-4f00-49e2-96e4-7c4f4a4ca378] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 00:13:40.081725   50505 system_pods.go:61] "etcd-no-preload-320390" [95e08672-dc80-4aa6-bd4a-e5f77bfc4b51] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 00:13:40.081738   50505 system_pods.go:61] "kube-apiserver-no-preload-320390" [3261561e-b7d5-4302-8e0b-301d00407e8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 00:13:40.081748   50505 system_pods.go:61] "kube-controller-manager-no-preload-320390" [b87602fd-b248-4529-9116-1851a4284bbf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 00:13:40.081763   50505 system_pods.go:61] "kube-proxy-c4mbm" [33806b69-57c0-4807-849b-b6a4f8a5db12] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 00:13:40.081777   50505 system_pods.go:61] "kube-scheduler-no-preload-320390" [4f7b4160-b99e-4f76-9b12-c5b1849c91b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 00:13:40.081791   50505 system_pods.go:61] "metrics-server-57f55c9bc5-th89c" [06aea7c0-065b-44a4-8d53-432f5722e937] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:13:40.081810   50505 system_pods.go:61] "storage-provisioner" [c7b0810b-1ba7-4d56-ad97-3f04d771960d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 00:13:40.081823   50505 system_pods.go:74] duration metric: took 17.024016ms to wait for pod list to return data ...
	I1108 00:13:40.081836   50505 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:13:40.093789   50505 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:13:40.093827   50505 node_conditions.go:123] node cpu capacity is 2
	I1108 00:13:40.093841   50505 node_conditions.go:105] duration metric: took 11.998569ms to run NodePressure ...
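
The NodePressure step reads capacities straight off the Node object; the quantities are Kubernetes resource strings, so the "17784752Ki" ephemeral-storage figure has to be parsed before any threshold check. A minimal parser for just the Ki suffix (Kubernetes itself uses the full resource.Quantity grammar from apimachinery):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseKi converts a "...Ki" quantity such as "17784752Ki" into bytes.
// Only the Ki suffix is handled; this is an illustration, not a substitute
// for resource.Quantity.
func parseKi(q string) (int64, error) {
	n, err := strconv.ParseInt(strings.TrimSuffix(q, "Ki"), 10, 64)
	if err != nil {
		return 0, err
	}
	return n * 1024, nil
}

func main() {
	b, err := parseKi("17784752Ki")
	if err != nil {
		panic(err)
	}
	fmt.Printf("ephemeral storage: %d bytes (~%.1f GiB), cpus: %d\n", b, float64(b)/(1<<30), 2)
}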
	I1108 00:13:40.093863   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:40.340962   50505 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1108 00:13:40.346004   50505 kubeadm.go:787] kubelet initialised
	I1108 00:13:40.346032   50505 kubeadm.go:788] duration metric: took 5.042344ms waiting for restarted kubelet to initialise ...
	I1108 00:13:40.346044   50505 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:13:40.355648   50505 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-lhnz5" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:42.377985   50505 pod_ready.go:102] pod "coredns-5dd5756b68-lhnz5" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:42.313355   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Start
	I1108 00:13:42.313526   50022 main.go:141] libmachine: (old-k8s-version-590541) Ensuring networks are active...
	I1108 00:13:42.314176   50022 main.go:141] libmachine: (old-k8s-version-590541) Ensuring network default is active
	I1108 00:13:42.314638   50022 main.go:141] libmachine: (old-k8s-version-590541) Ensuring network mk-old-k8s-version-590541 is active
	I1108 00:13:42.315060   50022 main.go:141] libmachine: (old-k8s-version-590541) Getting domain xml...
	I1108 00:13:42.315833   50022 main.go:141] libmachine: (old-k8s-version-590541) Creating domain...
	I1108 00:13:43.739499   50022 main.go:141] libmachine: (old-k8s-version-590541) Waiting to get IP...
	I1108 00:13:43.740647   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:43.741195   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:43.741259   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:43.741155   51822 retry.go:31] will retry after 195.621332ms: waiting for machine to come up
	I1108 00:13:43.938557   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:43.939127   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:43.939268   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:43.939200   51822 retry.go:31] will retry after 278.651736ms: waiting for machine to come up
	I1108 00:13:44.219831   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:44.220473   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:44.220500   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:44.220418   51822 retry.go:31] will retry after 384.748872ms: waiting for machine to come up
	I1108 00:13:44.607110   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:44.607665   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:44.607696   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:44.607591   51822 retry.go:31] will retry after 401.60668ms: waiting for machine to come up
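
The "will retry after ..." lines above come from a retry helper that sleeps for a growing, jittered interval between DHCP-lease lookups, which is why the waits are uneven (195ms, 278ms, 384ms, 401ms). A sketch of that shape, with the lookup stubbed out:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("machine has no IP yet")

// lookupIP stands in for the libvirt DHCP-lease query the kvm2 driver runs;
// the address it eventually returns is a documentation placeholder.
func lookupIP(attempt int) (string, error) {
	if attempt < 4 {
		return "", errNoIP
	}
	return "192.0.2.10", nil
}

func main() {
	base := 150 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("got IP:", ip)
			return
		}
		// Grow the base and add jitter so concurrent waiters don't probe in
		// lockstep, producing the uneven intervals seen in the log.
		wait := base + time.Duration(rand.Int63n(int64(base)))
		base += base / 2
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
}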
	I1108 00:13:43.871596   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetIP
	I1108 00:13:43.874814   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:43.875307   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:43.875357   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:43.875575   51228 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1108 00:13:43.880324   51228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
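
The one-liner above makes the host.minikube.internal mapping idempotent: grep -v strips any line already ending in a tab plus the hostname, echo appends the fresh "ip<tab>name" entry, and sudo cp installs the temp file over /etc/hosts in one step. A sketch of composing that command in Go; the Runner interface is a hypothetical stand-in for minikube's ssh_runner, not its real API:

package main

import "fmt"

// Runner executes a shell command on the guest; hypothetical stand-in.
type Runner interface {
	Run(cmd string) error
}

// updateHostsEntry rebuilds /etc/hosts idempotently, as the log's one-liner
// does: drop the stale mapping, append the new one, install atomically.
func updateHostsEntry(r Runner, ip, name string) error {
	cmd := fmt.Sprintf(
		"{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"",
		name, ip, name)
	return r.Run(cmd)
}

type echoRunner struct{}

func (echoRunner) Run(cmd string) error { fmt.Println(cmd); return nil }

func main() {
	_ = updateHostsEntry(echoRunner{}, "192.168.72.1", "host.minikube.internal")
}
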
	I1108 00:13:43.895271   51228 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1108 00:13:43.895331   51228 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:13:43.943120   51228 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1108 00:13:43.943238   51228 ssh_runner.go:195] Run: which lz4
	I1108 00:13:43.947723   51228 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1108 00:13:43.952328   51228 ssh_runner.go:362] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1108 00:13:43.952365   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1108 00:13:45.857547   51228 crio.go:444] Took 1.909852 seconds to copy over tarball
	I1108 00:13:45.857623   51228 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
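
These lines show the preload fast path: stat probes whether /preloaded.tar.lz4 already exists on the guest, the ~457 MB tarball is copied over only when the probe fails, tar -I lz4 unpacks it into /var, and the tarball is removed afterwards. A sketch of that sequence; the Runner interface and copyFile helper are hypothetical stand-ins, not minikube's API:

package main

import "fmt"

// Runner executes a shell command on the guest; hypothetical stand-in.
type Runner interface{ Run(cmd string) error }

// ensurePreload mirrors the sequence in the log: probe, copy if missing,
// extract into /var, then clean up the tarball.
func ensurePreload(r Runner, copyFile func(local, remote string) error, local string) error {
	if err := r.Run(`stat -c "%s %y" /preloaded.tar.lz4`); err != nil {
		if err := copyFile(local, "/preloaded.tar.lz4"); err != nil {
			return err
		}
	}
	if err := r.Run("sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
		return err
	}
	return r.Run("sudo rm -f /preloaded.tar.lz4")
}

type logRunner struct{}

func (logRunner) Run(cmd string) error { fmt.Println("Run:", cmd); return nil }

func main() {
	cp := func(local, remote string) error { fmt.Println("scp", local, "-->", remote); return nil }
	_ = ensurePreload(logRunner{}, cp, "preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4")
}
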
	I1108 00:13:45.314087   50613 api_server.go:279] https://192.168.39.159:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:13:45.314125   50613 api_server.go:103] status: https://192.168.39.159:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:13:45.314144   50613 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1108 00:13:45.333352   50613 api_server.go:279] https://192.168.39.159:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:13:45.333384   50613 api_server.go:103] status: https://192.168.39.159:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:13:45.833959   50613 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1108 00:13:45.852530   50613 api_server.go:279] https://192.168.39.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:13:45.852613   50613 api_server.go:103] status: https://192.168.39.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:13:46.333996   50613 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1108 00:13:46.346680   50613 api_server.go:279] https://192.168.39.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:13:46.346714   50613 api_server.go:103] status: https://192.168.39.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:13:46.833955   50613 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1108 00:13:46.841287   50613 api_server.go:279] https://192.168.39.159:8443/healthz returned 200:
	ok
	I1108 00:13:46.853271   50613 api_server.go:141] control plane version: v1.28.3
	I1108 00:13:46.853299   50613 api_server.go:131] duration metric: took 6.372641273s to wait for apiserver health ...
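
The api_server.go lines above poll /healthz roughly every 500ms, treating 403 (RBAC roles not bootstrapped yet) and 500 (poststarthooks still failing) as "not ready" until a plain 200 "ok" arrives. A minimal sketch of that polling loop; the insecure TLS client and fixed interval are assumptions, not minikube's exact code:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it returns
// HTTP 200, or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	// the apiserver's serving cert is not trusted by the probing host,
	// so certificate verification is skipped for the health check
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 while RBAC bootstraps and 500 while poststarthooks finish
			// both mean "retry"; only 200 ends the wait
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("status: %s returned error %d:\n%s\n", url, resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.159:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
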
	I1108 00:13:46.853310   50613 cni.go:84] Creating CNI manager for ""
	I1108 00:13:46.853318   50613 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:13:46.855336   50613 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:13:46.856955   50613 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:13:46.892049   50613 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1108 00:13:46.933039   50613 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:13:44.399678   50505 pod_ready.go:102] pod "coredns-5dd5756b68-lhnz5" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:45.879110   50505 pod_ready.go:92] pod "coredns-5dd5756b68-lhnz5" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:45.879142   50505 pod_ready.go:81] duration metric: took 5.523463579s waiting for pod "coredns-5dd5756b68-lhnz5" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:45.879154   50505 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:45.885356   50505 pod_ready.go:92] pod "etcd-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:45.885377   50505 pod_ready.go:81] duration metric: took 6.21581ms waiting for pod "etcd-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:45.885385   50505 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:47.914308   50505 pod_ready.go:102] pod "kube-apiserver-no-preload-320390" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:45.011074   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:45.011525   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:45.011560   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:45.011500   51822 retry.go:31] will retry after 708.154492ms: waiting for machine to come up
	I1108 00:13:45.720911   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:45.721383   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:45.721418   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:45.721294   51822 retry.go:31] will retry after 746.365542ms: waiting for machine to come up
	I1108 00:13:46.469031   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:46.469615   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:46.469641   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:46.469556   51822 retry.go:31] will retry after 924.305758ms: waiting for machine to come up
	I1108 00:13:47.395756   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:47.396297   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:47.396323   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:47.396241   51822 retry.go:31] will retry after 1.343866256s: waiting for machine to come up
	I1108 00:13:48.741427   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:48.741851   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:48.741883   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:48.741816   51822 retry.go:31] will retry after 1.388849147s: waiting for machine to come up
	I1108 00:13:49.625178   51228 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.76753046s)
	I1108 00:13:49.625229   51228 crio.go:451] Took 3.767633 seconds to extract the tarball
	I1108 00:13:49.625242   51228 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1108 00:13:49.670263   51228 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:13:49.727650   51228 crio.go:496] all images are preloaded for cri-o runtime.
	I1108 00:13:49.727677   51228 cache_images.go:84] Images are preloaded, skipping loading
	I1108 00:13:49.727747   51228 ssh_runner.go:195] Run: crio config
	I1108 00:13:49.811565   51228 cni.go:84] Creating CNI manager for ""
	I1108 00:13:49.811592   51228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:13:49.811615   51228 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1108 00:13:49.811639   51228 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.116 APIServerPort:8444 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-039263 NodeName:default-k8s-diff-port-039263 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 00:13:49.811812   51228 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.116
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-039263"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 00:13:49.811906   51228 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-039263 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-039263 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1108 00:13:49.811984   51228 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1108 00:13:49.822961   51228 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 00:13:49.823027   51228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 00:13:49.832632   51228 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1108 00:13:49.850812   51228 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 00:13:49.869345   51228 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1108 00:13:49.887645   51228 ssh_runner.go:195] Run: grep 192.168.72.116	control-plane.minikube.internal$ /etc/hosts
	I1108 00:13:49.892538   51228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:13:49.907166   51228 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263 for IP: 192.168.72.116
	I1108 00:13:49.907205   51228 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:13:49.907374   51228 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1108 00:13:49.907425   51228 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1108 00:13:49.907523   51228 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/client.key
	I1108 00:13:49.907601   51228 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/apiserver.key.b2cbdf93
	I1108 00:13:49.907658   51228 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/proxy-client.key
	I1108 00:13:49.907807   51228 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem (1338 bytes)
	W1108 00:13:49.907851   51228 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848_empty.pem, impossibly tiny 0 bytes
	I1108 00:13:49.907872   51228 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 00:13:49.907915   51228 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1108 00:13:49.907951   51228 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1108 00:13:49.907988   51228 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1108 00:13:49.908046   51228 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:13:49.908955   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1108 00:13:49.938941   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 00:13:49.964654   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 00:13:49.991354   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 00:13:50.018895   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 00:13:50.048330   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 00:13:50.076095   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 00:13:50.103752   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 00:13:50.130140   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem --> /usr/share/ca-certificates/16848.pem (1338 bytes)
	I1108 00:13:50.156862   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /usr/share/ca-certificates/168482.pem (1708 bytes)
	I1108 00:13:50.181994   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 00:13:50.208069   51228 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 00:13:50.226069   51228 ssh_runner.go:195] Run: openssl version
	I1108 00:13:50.232941   51228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168482.pem && ln -fs /usr/share/ca-certificates/168482.pem /etc/ssl/certs/168482.pem"
	I1108 00:13:50.246981   51228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168482.pem
	I1108 00:13:50.252981   51228 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1108 00:13:50.253059   51228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168482.pem
	I1108 00:13:50.260626   51228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168482.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 00:13:50.274135   51228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 00:13:50.285611   51228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:50.290761   51228 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:50.290837   51228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:50.297508   51228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 00:13:50.308772   51228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16848.pem && ln -fs /usr/share/ca-certificates/16848.pem /etc/ssl/certs/16848.pem"
	I1108 00:13:50.320122   51228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16848.pem
	I1108 00:13:50.326021   51228 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1108 00:13:50.326083   51228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16848.pem
	I1108 00:13:50.332534   51228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16848.pem /etc/ssl/certs/51391683.0"
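
The openssl x509 -hash -noout calls above compute the subject-name hash that OpenSSL uses to look up CA files: each certificate installed under /usr/share/ca-certificates gets a symlink named <hash>.0 in /etc/ssl/certs (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 for 168482.pem, 51391683.0 for 16848.pem). A sketch of that linking step; the function name and fixed paths are illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCert makes certPath discoverable by OpenSSL's hashed lookup:
// `openssl x509 -hash -noout -in cert` prints the subject hash, and a
// symlink named <hash>.0 in /etc/ssl/certs points at the certificate.
func linkCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	// remove any stale link first, preserving `ln -fs` semantics
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
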
	I1108 00:13:50.344381   51228 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1108 00:13:50.350040   51228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 00:13:50.356282   51228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 00:13:50.362850   51228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 00:13:50.378237   51228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 00:13:50.385607   51228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 00:13:50.392272   51228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1108 00:13:50.399220   51228 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-039263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-039263 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.116 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:13:50.399304   51228 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 00:13:50.399358   51228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:13:50.449693   51228 cri.go:89] found id: ""
	I1108 00:13:50.449770   51228 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 00:13:50.460225   51228 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1108 00:13:50.460256   51228 kubeadm.go:636] restartCluster start
	I1108 00:13:50.460313   51228 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 00:13:50.469777   51228 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:50.470973   51228 kubeconfig.go:92] found "default-k8s-diff-port-039263" server: "https://192.168.72.116:8444"
	I1108 00:13:50.473778   51228 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 00:13:50.482964   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:50.483022   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:50.495100   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:50.495123   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:50.495186   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:50.508735   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:46.949012   50613 system_pods.go:59] 9 kube-system pods found
	I1108 00:13:46.950252   50613 system_pods.go:61] "coredns-5dd5756b68-7djdr" [a1459bf3-703b-418a-bc22-c98e285c6e31] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 00:13:46.950302   50613 system_pods.go:61] "coredns-5dd5756b68-8qjbd" [fa7b05fd-725b-4c9c-815e-360f2bef8ee6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 00:13:46.950336   50613 system_pods.go:61] "etcd-embed-certs-253253" [2631ed7d-3af4-4848-bbb8-c77038f8a1f4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 00:13:46.950369   50613 system_pods.go:61] "kube-apiserver-embed-certs-253253" [80b3e8da-6474-4fd8-bb86-0d9cc70086ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 00:13:46.950391   50613 system_pods.go:61] "kube-controller-manager-embed-certs-253253" [ee19def3-043a-4832-8153-52aaf8b4748a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 00:13:46.950407   50613 system_pods.go:61] "kube-proxy-rsgkf" [509d66e3-b034-4dcd-a16e-b2f93b9efa6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 00:13:46.950482   50613 system_pods.go:61] "kube-scheduler-embed-certs-253253" [ef7bb9c3-98c8-45d8-8f54-852fb639b408] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 00:13:46.950497   50613 system_pods.go:61] "metrics-server-57f55c9bc5-s7ldx" [61cd423c-edbd-4d0c-87e8-1ac8e52c70e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:13:46.950507   50613 system_pods.go:61] "storage-provisioner" [d6157b7c-6b52-4ca8-a935-d68a0291305f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 00:13:46.950519   50613 system_pods.go:74] duration metric: took 17.457991ms to wait for pod list to return data ...
	I1108 00:13:46.950532   50613 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:13:46.956062   50613 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:13:46.956142   50613 node_conditions.go:123] node cpu capacity is 2
	I1108 00:13:46.956165   50613 node_conditions.go:105] duration metric: took 5.622732ms to run NodePressure ...
	I1108 00:13:46.956193   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:47.272695   50613 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1108 00:13:47.280001   50613 kubeadm.go:787] kubelet initialised
	I1108 00:13:47.280031   50613 kubeadm.go:788] duration metric: took 7.30064ms waiting for restarted kubelet to initialise ...
	I1108 00:13:47.280041   50613 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:13:47.290043   50613 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-7djdr" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:50.378703   50613 pod_ready.go:102] pod "coredns-5dd5756b68-7djdr" in "kube-system" namespace has status "Ready":"False"
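
The pod_ready.go lines above poll each pod until its Ready condition reports True (the status "Ready":"True" transitions), within a 4m0s budget per pod. A sketch of such a wait with client-go; clientset construction is shown via the standard kubeconfig helpers, and the function names are illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls the pod every two seconds until Ready or timeout.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %q in %q never became Ready within %v", name, ns, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	fmt.Println(waitPodReady(cs, "kube-system", "coredns-5dd5756b68-7djdr", 4*time.Minute))
}
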
	I1108 00:13:50.370740   50505 pod_ready.go:102] pod "kube-apiserver-no-preload-320390" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:51.912802   50505 pod_ready.go:92] pod "kube-apiserver-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:51.912845   50505 pod_ready.go:81] duration metric: took 6.027451924s waiting for pod "kube-apiserver-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.912861   50505 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.920043   50505 pod_ready.go:92] pod "kube-controller-manager-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:51.920073   50505 pod_ready.go:81] duration metric: took 7.195906ms waiting for pod "kube-controller-manager-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.920085   50505 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c4mbm" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.927863   50505 pod_ready.go:92] pod "kube-proxy-c4mbm" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:51.927887   50505 pod_ready.go:81] duration metric: took 7.793258ms waiting for pod "kube-proxy-c4mbm" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.927900   50505 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.934444   50505 pod_ready.go:92] pod "kube-scheduler-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:51.934470   50505 pod_ready.go:81] duration metric: took 6.560509ms waiting for pod "kube-scheduler-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.934481   50505 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:50.131947   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:50.132491   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:50.132526   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:50.132397   51822 retry.go:31] will retry after 1.410573405s: waiting for machine to come up
	I1108 00:13:51.544674   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:51.545073   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:51.545099   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:51.545025   51822 retry.go:31] will retry after 1.773802671s: waiting for machine to come up
	I1108 00:13:53.320381   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:53.320863   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:53.320893   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:53.320805   51822 retry.go:31] will retry after 3.166868207s: waiting for machine to come up
	I1108 00:13:51.009734   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:51.009825   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:51.026052   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:51.509697   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:51.509786   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:51.527840   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:52.009557   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:52.009656   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:52.025049   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:52.509606   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:52.509707   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:52.526174   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:53.008803   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:53.008954   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:53.022472   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:53.508900   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:53.509005   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:53.525225   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:54.009884   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:54.009974   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:54.022171   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:54.509280   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:54.509376   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:54.522041   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:55.009670   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:55.009752   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:55.023035   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:55.509640   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:55.509717   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:55.526730   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:52.836317   50613 pod_ready.go:102] pod "coredns-5dd5756b68-7djdr" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:53.332094   50613 pod_ready.go:92] pod "coredns-5dd5756b68-7djdr" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:53.332121   50613 pod_ready.go:81] duration metric: took 6.042047013s waiting for pod "coredns-5dd5756b68-7djdr" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:53.332133   50613 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-8qjbd" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:53.337858   50613 pod_ready.go:92] pod "coredns-5dd5756b68-8qjbd" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:53.337882   50613 pod_ready.go:81] duration metric: took 5.740229ms waiting for pod "coredns-5dd5756b68-8qjbd" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:53.337894   50613 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:55.356131   50613 pod_ready.go:102] pod "etcd-embed-certs-253253" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:54.323357   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:56.328874   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:58.820773   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:56.490058   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:56.490553   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:56.490590   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:56.490511   51822 retry.go:31] will retry after 3.18441493s: waiting for machine to come up
	I1108 00:13:56.009549   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:56.009646   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:56.024559   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:56.508912   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:56.509015   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:56.521861   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:57.009408   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:57.009479   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:57.022156   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:57.509466   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:57.509554   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:57.522766   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:58.008909   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:58.009026   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:58.021521   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:58.509050   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:58.509134   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:58.521387   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:59.008889   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:59.008975   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:59.021781   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:59.509489   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:59.509575   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:59.521581   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:00.009117   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:14:00.009196   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:00.022210   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:00.483934   51228 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1108 00:14:00.483990   51228 kubeadm.go:1128] stopping kube-system containers ...
	I1108 00:14:00.484004   51228 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1108 00:14:00.484066   51228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:14:00.528120   51228 cri.go:89] found id: ""
	I1108 00:14:00.528178   51228 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1108 00:14:00.544876   51228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:14:00.553827   51228 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:14:00.553883   51228 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:14:00.562695   51228 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1108 00:14:00.562721   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:00.676044   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
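
With the kubelet stopped and the old kubeconfig files missing, the restart attempt gives way to a reconfigure: the freshly rendered kubeadm.yaml is installed and individual kubeadm init phases are rerun (certs all, kubeconfig all above; addon all earlier) instead of a full kubeadm init. A sketch of those invocations; the Runner interface is the same hypothetical command-runner stand-in used in the earlier sketches:

package main

import "fmt"

// Runner executes a shell command on the guest; hypothetical stand-in.
type Runner interface{ Run(cmd string) error }

// reconfigure reruns the individual kubeadm init phases shown in the log
// against the freshly copied config.
func reconfigure(r Runner) error {
	const prefix = `sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase `
	for _, phase := range []string{"certs all", "kubeconfig all"} {
		if err := r.Run(prefix + phase + " --config /var/tmp/minikube/kubeadm.yaml"); err != nil {
			return err
		}
	}
	return nil
}

type echoRunner struct{}

func (echoRunner) Run(cmd string) error { fmt.Println(cmd); return nil }

func main() {
	_ = reconfigure(echoRunner{})
}
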
	I1108 00:13:57.856242   50613 pod_ready.go:102] pod "etcd-embed-certs-253253" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:58.855444   50613 pod_ready.go:92] pod "etcd-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:58.855471   50613 pod_ready.go:81] duration metric: took 5.517568786s waiting for pod "etcd-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.855479   50613 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.860431   50613 pod_ready.go:92] pod "kube-apiserver-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:58.860453   50613 pod_ready.go:81] duration metric: took 4.966273ms waiting for pod "kube-apiserver-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.860464   50613 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.865854   50613 pod_ready.go:92] pod "kube-controller-manager-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:58.865874   50613 pod_ready.go:81] duration metric: took 5.40177ms waiting for pod "kube-controller-manager-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.865914   50613 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rsgkf" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.870805   50613 pod_ready.go:92] pod "kube-proxy-rsgkf" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:58.870826   50613 pod_ready.go:81] duration metric: took 4.898411ms waiting for pod "kube-proxy-rsgkf" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.870835   50613 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.958009   50613 pod_ready.go:92] pod "kube-scheduler-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:58.958034   50613 pod_ready.go:81] duration metric: took 87.190501ms waiting for pod "kube-scheduler-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.958052   50613 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:01.265674   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:00.823696   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:03.322129   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:59.678086   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:59.678579   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:59.678598   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:59.678528   51822 retry.go:31] will retry after 4.30352873s: waiting for machine to come up
	I1108 00:14:03.983994   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:03.984437   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has current primary IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:03.984474   50022 main.go:141] libmachine: (old-k8s-version-590541) Found IP for machine: 192.168.50.49
	I1108 00:14:03.984489   50022 main.go:141] libmachine: (old-k8s-version-590541) Reserving static IP address...
	I1108 00:14:03.984947   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "old-k8s-version-590541", mac: "52:54:00:3c:aa:82", ip: "192.168.50.49"} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:03.984981   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | skip adding static IP to network mk-old-k8s-version-590541 - found existing host DHCP lease matching {name: "old-k8s-version-590541", mac: "52:54:00:3c:aa:82", ip: "192.168.50.49"}
	I1108 00:14:03.985000   50022 main.go:141] libmachine: (old-k8s-version-590541) Reserved static IP address: 192.168.50.49
	I1108 00:14:03.985020   50022 main.go:141] libmachine: (old-k8s-version-590541) Waiting for SSH to be available...
	I1108 00:14:03.985034   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Getting to WaitForSSH function...
	I1108 00:14:03.987671   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:03.988083   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:03.988116   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:03.988388   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Using SSH client type: external
	I1108 00:14:03.988424   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa (-rw-------)
	I1108 00:14:03.988461   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.49 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1108 00:14:03.988481   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | About to run SSH command:
	I1108 00:14:03.988496   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | exit 0
	I1108 00:14:04.080867   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | SSH cmd err, output: <nil>: 
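WaitForSSH above repeatedly drives an external ssh client at the freshly leased IP and treats a clean "exit 0" as proof the guest is reachable. A rough sketch of that probe loop; the retry budget and sleep are assumptions, and host/key values are placeholders:

	package sshprobe

	import (
		"os/exec"
		"time"
	)

	// waitForSSH runs "ssh ... exit 0" until it succeeds or attempts run
	// out, mirroring the external-client probe in the log. The options
	// follow the ones shown above, abbreviated.
	func waitForSSH(host, keyPath string) error {
		var err error
		for i := 0; i < 30; i++ { // retry budget is an assumption
			cmd := exec.Command("ssh",
				"-o", "StrictHostKeyChecking=no",
				"-o", "UserKnownHostsFile=/dev/null",
				"-o", "ConnectTimeout=10",
				"-i", keyPath,
				"docker@"+host, "exit 0")
			if err = cmd.Run(); err == nil {
				return nil // guest answered; SSH is available
			}
			time.Sleep(5 * time.Second)
		}
		return err
	}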
	I1108 00:14:04.081275   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetConfigRaw
	I1108 00:14:04.081955   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetIP
	I1108 00:14:04.085061   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.085512   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.085554   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.085942   50022 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/config.json ...
	I1108 00:14:04.086165   50022 machine.go:88] provisioning docker machine ...
	I1108 00:14:04.086188   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:14:04.086417   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetMachineName
	I1108 00:14:04.086612   50022 buildroot.go:166] provisioning hostname "old-k8s-version-590541"
	I1108 00:14:04.086634   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetMachineName
	I1108 00:14:04.086822   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:04.089431   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.089808   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.089838   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.090007   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:04.090201   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.090362   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.090535   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:04.090686   50022 main.go:141] libmachine: Using SSH client type: native
	I1108 00:14:04.090991   50022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1108 00:14:04.091002   50022 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-590541 && echo "old-k8s-version-590541" | sudo tee /etc/hostname
	I1108 00:14:04.228526   50022 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-590541
	
	I1108 00:14:04.228561   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:04.232020   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.232390   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.232454   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.232743   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:04.232930   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.233109   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.233264   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:04.233430   50022 main.go:141] libmachine: Using SSH client type: native
	I1108 00:14:04.233786   50022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1108 00:14:04.233812   50022 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-590541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-590541/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-590541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 00:14:04.370396   50022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 00:14:04.370424   50022 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1108 00:14:04.370469   50022 buildroot.go:174] setting up certificates
	I1108 00:14:04.370487   50022 provision.go:83] configureAuth start
	I1108 00:14:04.370505   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetMachineName
	I1108 00:14:04.370779   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetIP
	I1108 00:14:04.373683   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.374081   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.374111   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.374240   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:04.377048   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.377441   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.377469   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.377596   50022 provision.go:138] copyHostCerts
	I1108 00:14:04.377658   50022 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1108 00:14:04.377678   50022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1108 00:14:04.377748   50022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1108 00:14:04.377855   50022 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1108 00:14:04.377867   50022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1108 00:14:04.377893   50022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1108 00:14:04.377965   50022 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1108 00:14:04.377979   50022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1108 00:14:04.378005   50022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1108 00:14:04.378064   50022 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-590541 san=[192.168.50.49 192.168.50.49 localhost 127.0.0.1 minikube old-k8s-version-590541]
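The "generating server cert" step issues a machine certificate whose SANs cover the VM IP, localhost, and the hostnames listed above. A compact stand-in for that kind of issuance with the standard library (self-signed here for brevity; the real provisioning signs with ca.pem/ca-key.pem instead):

	package certs

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// newServerCert returns DER bytes for a self-signed server cert
	// carrying the given IP and DNS SANs. Illustrative only.
	func newServerCert(ips []net.IP, dns []string) ([]byte, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-590541"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  ips,
			DNSNames:     dns,
		}
		return x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	}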
	I1108 00:14:04.534682   50022 provision.go:172] copyRemoteCerts
	I1108 00:14:04.534750   50022 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 00:14:04.534778   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:04.538002   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.538379   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.538408   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.538639   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:04.538789   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.538975   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:04.539146   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:14:04.632308   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 00:14:01.961492   51228 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.285410864s)
	I1108 00:14:01.961529   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:02.165604   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:02.235655   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:02.352126   51228 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:14:02.352212   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:02.370538   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:02.884696   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:03.384139   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:03.884529   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:04.384134   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:04.884877   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:04.913244   51228 api_server.go:72] duration metric: took 2.56112461s to wait for apiserver process to appear ...
	I1108 00:14:04.913273   51228 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:14:04.913295   51228 api_server.go:253] Checking apiserver healthz at https://192.168.72.116:8444/healthz ...
	I1108 00:14:04.657542   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1108 00:14:04.682815   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 00:14:04.709405   50022 provision.go:86] duration metric: configureAuth took 338.902281ms
	I1108 00:14:04.709439   50022 buildroot.go:189] setting minikube options for container-runtime
	I1108 00:14:04.709651   50022 config.go:182] Loaded profile config "old-k8s-version-590541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1108 00:14:04.709741   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:04.713141   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.713520   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.713561   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.713718   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:04.713923   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.714108   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.714259   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:04.714497   50022 main.go:141] libmachine: Using SSH client type: native
	I1108 00:14:04.714885   50022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1108 00:14:04.714905   50022 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 00:14:05.055346   50022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 00:14:05.055427   50022 machine.go:91] provisioned docker machine in 969.247821ms
	I1108 00:14:05.055446   50022 start.go:300] post-start starting for "old-k8s-version-590541" (driver="kvm2")
	I1108 00:14:05.055459   50022 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 00:14:05.055493   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:14:05.055841   50022 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 00:14:05.055895   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:05.058959   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.059423   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:05.059457   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.059601   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:05.059775   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:05.059895   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:05.060042   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:14:05.151543   50022 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 00:14:05.155876   50022 info.go:137] Remote host: Buildroot 2021.02.12
	I1108 00:14:05.155902   50022 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1108 00:14:05.155969   50022 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1108 00:14:05.156056   50022 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1108 00:14:05.156229   50022 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 00:14:05.165742   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:14:05.190622   50022 start.go:303] post-start completed in 135.159333ms
	I1108 00:14:05.190648   50022 fix.go:56] fixHost completed within 22.904612851s
	I1108 00:14:05.190673   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:05.193716   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.194165   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:05.194195   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.194480   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:05.194725   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:05.194929   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:05.195106   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:05.195260   50022 main.go:141] libmachine: Using SSH client type: native
	I1108 00:14:05.195755   50022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1108 00:14:05.195778   50022 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1108 00:14:05.326443   50022 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699402445.269657345
	
	I1108 00:14:05.326467   50022 fix.go:206] guest clock: 1699402445.269657345
	I1108 00:14:05.326476   50022 fix.go:219] Guest: 2023-11-08 00:14:05.269657345 +0000 UTC Remote: 2023-11-08 00:14:05.190652611 +0000 UTC m=+370.589908297 (delta=79.004734ms)
	I1108 00:14:05.326524   50022 fix.go:190] guest clock delta is within tolerance: 79.004734ms
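The clock check parses the guest's "date +%s.%N" output and compares it against the host-side timestamp taken around the SSH round trip; here the 79ms delta is inside tolerance, so no clock resync is forced. A small sketch of that comparison (the 2s tolerance is an assumed value, not fix.go's actual constant):

	package clockcheck

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta turns "1699402445.269657345" (date +%s.%N output)
	// into a time.Time and returns its offset from the host timestamp.
	func guestClockDelta(out string, host time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return 0, err
			}
		}
		delta := time.Unix(sec, nsec).Sub(host)
		if delta < 0 {
			delta = -delta
		}
		if delta > 2*time.Second { // assumed tolerance
			return delta, fmt.Errorf("guest clock skewed by %v", delta)
		}
		return delta, nil
	}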
	I1108 00:14:05.326531   50022 start.go:83] releasing machines lock for "old-k8s-version-590541", held for 23.040527062s
	I1108 00:14:05.326558   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:14:05.326845   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetIP
	I1108 00:14:05.329775   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.330225   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:05.330254   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.330447   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:14:05.331102   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:14:05.331338   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:14:05.331424   50022 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 00:14:05.331467   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:05.331584   50022 ssh_runner.go:195] Run: cat /version.json
	I1108 00:14:05.331610   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:05.334586   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.334817   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.335125   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:05.335182   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.335225   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:05.335307   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:05.335339   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.335418   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:05.335536   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:05.335603   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:05.335774   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:05.335783   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:14:05.335906   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:05.336063   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:14:05.423679   50022 ssh_runner.go:195] Run: systemctl --version
	I1108 00:14:05.446956   50022 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 00:14:05.598713   50022 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 00:14:05.605558   50022 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 00:14:05.605641   50022 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:14:05.620183   50022 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 00:14:05.620211   50022 start.go:472] detecting cgroup driver to use...
	I1108 00:14:05.620277   50022 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 00:14:05.635981   50022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 00:14:05.649637   50022 docker.go:203] disabling cri-docker service (if available) ...
	I1108 00:14:05.649699   50022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 00:14:05.664232   50022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 00:14:05.678205   50022 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 00:14:05.791991   50022 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 00:14:05.925002   50022 docker.go:219] disabling docker service ...
	I1108 00:14:05.925135   50022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 00:14:05.939853   50022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 00:14:05.955518   50022 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 00:14:06.074872   50022 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 00:14:06.189371   50022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 00:14:06.202247   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 00:14:06.219012   50022 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1108 00:14:06.219082   50022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:14:06.229837   50022 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 00:14:06.229911   50022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:14:06.239769   50022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:14:06.248633   50022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
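The sed edits above pin the pause image, force the cgroupfs cgroup manager in /etc/crio/crio.conf.d/02-crio.conf, drop any existing conmon_cgroup line, and re-add conmon_cgroup = "pod". An equivalent rewrite sketched in Go; the regexes mirror the sed expressions, with file handling simplified:

	package crioconf

	import (
		"os"
		"regexp"
	)

	// pinCrioConfig applies the same substitutions as the sed commands
	// in the log: set pause_image, remove conmon_cgroup, then set
	// cgroup_manager and append conmon_cgroup = "pod" after it.
	func pinCrioConfig(path string) error {
		b, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		s := string(b)
		s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.1"`)
		s = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n?`).
			ReplaceAllString(s, "")
		s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(s, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
		return os.WriteFile(path, []byte(s), 0o644)
	}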
	I1108 00:14:06.257717   50022 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 00:14:06.268893   50022 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 00:14:06.277427   50022 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1108 00:14:06.277495   50022 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1108 00:14:06.290771   50022 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 00:14:06.299918   50022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 00:14:06.421038   50022 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 00:14:06.587544   50022 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 00:14:06.587624   50022 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 00:14:06.592726   50022 start.go:540] Will wait 60s for crictl version
	I1108 00:14:06.592781   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:06.596695   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 00:14:06.637642   50022 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1108 00:14:06.637733   50022 ssh_runner.go:195] Run: crio --version
	I1108 00:14:06.690026   50022 ssh_runner.go:195] Run: crio --version
	I1108 00:14:06.740455   50022 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1108 00:14:03.266720   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:05.764837   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:05.322160   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:07.329491   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:06.741799   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetIP
	I1108 00:14:06.744301   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:06.744599   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:06.744630   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:06.744861   50022 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1108 00:14:06.749385   50022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:14:06.762645   50022 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1108 00:14:06.762732   50022 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:14:06.804386   50022 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1108 00:14:06.804458   50022 ssh_runner.go:195] Run: which lz4
	I1108 00:14:06.808948   50022 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1108 00:14:06.813319   50022 ssh_runner.go:362] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1108 00:14:06.813355   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1108 00:14:08.476578   50022 crio.go:444] Took 1.667668 seconds to copy over tarball
	I1108 00:14:08.476646   50022 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
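The preload path checks for /preloaded.tar.lz4 on the guest, scp's the cached tarball over when it is absent, and unpacks it into /var with lz4 via tar (the tarball itself is removed a few steps later). The guest-side half of that flow, reduced to a sketch:

	package preload

	import (
		"os"
		"os/exec"
	)

	// extractPreload mirrors the guest-side steps in the log: verify the
	// tarball exists, untar it into /var with lz4 decompression, then
	// delete it.
	func extractPreload(tarball string) error {
		if _, err := os.Stat(tarball); err != nil {
			return err // not copied over yet; caller would scp it first
		}
		if err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball).Run(); err != nil {
			return err
		}
		return os.Remove(tarball)
	}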
	I1108 00:14:09.078810   51228 api_server.go:279] https://192.168.72.116:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:14:09.078843   51228 api_server.go:103] status: https://192.168.72.116:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:14:09.078859   51228 api_server.go:253] Checking apiserver healthz at https://192.168.72.116:8444/healthz ...
	I1108 00:14:09.140049   51228 api_server.go:279] https://192.168.72.116:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:14:09.140083   51228 api_server.go:103] status: https://192.168.72.116:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:14:09.641000   51228 api_server.go:253] Checking apiserver healthz at https://192.168.72.116:8444/healthz ...
	I1108 00:14:09.647216   51228 api_server.go:279] https://192.168.72.116:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:14:09.647247   51228 api_server.go:103] status: https://192.168.72.116:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:14:10.140446   51228 api_server.go:253] Checking apiserver healthz at https://192.168.72.116:8444/healthz ...
	I1108 00:14:10.148995   51228 api_server.go:279] https://192.168.72.116:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:14:10.149028   51228 api_server.go:103] status: https://192.168.72.116:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:14:10.640719   51228 api_server.go:253] Checking apiserver healthz at https://192.168.72.116:8444/healthz ...
	I1108 00:14:10.649076   51228 api_server.go:279] https://192.168.72.116:8444/healthz returned 200:
	ok
	I1108 00:14:10.660508   51228 api_server.go:141] control plane version: v1.28.3
	I1108 00:14:10.660545   51228 api_server.go:131] duration metric: took 5.747263547s to wait for apiserver health ...
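The healthz sequence above is the normal bring-up arc: 403 while anonymous access to /healthz is still forbidden, 500 while the rbac/bootstrap-roles and priority-class post-start hooks finish, then 200 "ok". A bare-bones version of that poll; InsecureSkipVerify stands in for minikube's real certificate handling, and the intervals are assumptions:

	package healthz

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls the apiserver /healthz endpoint until it returns
	// 200 "ok" or the deadline passes, tolerating 403/500 along the way.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy within %v", timeout)
	}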
	I1108 00:14:10.660556   51228 cni.go:84] Creating CNI manager for ""
	I1108 00:14:10.660566   51228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:14:10.662644   51228 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:14:10.664069   51228 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:14:10.682131   51228 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1108 00:14:10.709582   51228 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:14:10.725779   51228 system_pods.go:59] 8 kube-system pods found
	I1108 00:14:10.725840   51228 system_pods.go:61] "coredns-5dd5756b68-rz9t4" [d7b24f41-ed9e-4b07-991b-8587f49d7902] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 00:14:10.725854   51228 system_pods.go:61] "etcd-default-k8s-diff-port-039263" [f58b5fbb-a565-4d47-8b3d-ea62169dc0fc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 00:14:10.725868   51228 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-039263" [d0c3391c-679f-49ad-a6ff-ef62d74a62ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 00:14:10.725882   51228 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-039263" [33f54c9b-cc67-4662-8db9-c735fde4d9a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 00:14:10.725903   51228 system_pods.go:61] "kube-proxy-z7b8g" [079a28b1-dbad-4e62-a9ea-b667206433cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 00:14:10.725914   51228 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-039263" [629f940b-6d2a-4c3c-8a11-2805dc2c04d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 00:14:10.725927   51228 system_pods.go:61] "metrics-server-57f55c9bc5-nlhpn" [f5d69cb1-4266-45fc-9bab-57053f915aa0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:14:10.725941   51228 system_pods.go:61] "storage-provisioner" [fb6541da-3ed3-4abb-b534-643bb5faf7d3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 00:14:10.725953   51228 system_pods.go:74] duration metric: took 16.346941ms to wait for pod list to return data ...
	I1108 00:14:10.725965   51228 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:14:10.730466   51228 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:14:10.730555   51228 node_conditions.go:123] node cpu capacity is 2
	I1108 00:14:10.730574   51228 node_conditions.go:105] duration metric: took 4.602969ms to run NodePressure ...
	I1108 00:14:10.730595   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:07.772448   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:10.267241   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:09.824633   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:11.829090   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:14.015104   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:11.781938   50022 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.305246635s)
	I1108 00:14:11.781979   50022 crio.go:451] Took 3.305377 seconds to extract the tarball
	I1108 00:14:11.781999   50022 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1108 00:14:11.837911   50022 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:14:11.907599   50022 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1108 00:14:11.907634   50022 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1108 00:14:11.907702   50022 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:14:11.907965   50022 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1108 00:14:11.907983   50022 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1108 00:14:11.908131   50022 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1108 00:14:11.907966   50022 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1108 00:14:11.908257   50022 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1108 00:14:11.908131   50022 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1108 00:14:11.908365   50022 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1108 00:14:11.909163   50022 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1108 00:14:11.909239   50022 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1108 00:14:11.909251   50022 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1108 00:14:11.909332   50022 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1108 00:14:11.909171   50022 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1108 00:14:11.909397   50022 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1108 00:14:11.909435   50022 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:14:11.909625   50022 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1108 00:14:12.040043   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1108 00:14:12.042004   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1108 00:14:12.047478   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1108 00:14:12.051016   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1108 00:14:12.095045   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1108 00:14:12.126645   50022 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1108 00:14:12.126718   50022 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1108 00:14:12.126788   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.133035   50022 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1108 00:14:12.133078   50022 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1108 00:14:12.133120   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.164621   50022 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1108 00:14:12.164686   50022 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1108 00:14:12.164754   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.182223   50022 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1108 00:14:12.182267   50022 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1108 00:14:12.182318   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.201151   50022 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1108 00:14:12.201196   50022 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1108 00:14:12.201244   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.201255   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1108 00:14:12.201306   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1108 00:14:12.201305   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1108 00:14:12.201341   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1108 00:14:12.203375   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1108 00:14:12.208529   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1108 00:14:12.341873   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1108 00:14:12.341901   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1108 00:14:12.341954   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1108 00:14:12.341960   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1108 00:14:12.356561   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1108 00:14:12.356663   50022 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1108 00:14:12.361927   50022 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1108 00:14:12.361962   50022 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1108 00:14:12.362023   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.382770   50022 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1108 00:14:12.382819   50022 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1108 00:14:12.382864   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.406169   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1108 00:14:12.406213   50022 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1108 00:14:12.406228   50022 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1108 00:14:12.406273   50022 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1108 00:14:12.406313   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1108 00:14:12.406274   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1108 00:14:12.863910   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:14:14.488498   50022 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0: (2.082152502s)
	I1108 00:14:14.488536   50022 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.082234083s)
	I1108 00:14:14.488548   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1108 00:14:14.488571   50022 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1108 00:14:14.488623   50022 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0: (2.082249259s)
	I1108 00:14:14.488666   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1108 00:14:14.488711   50022 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.624766966s)
	I1108 00:14:14.488762   50022 cache_images.go:92] LoadImages completed in 2.581114029s
	W1108 00:14:14.488842   50022 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
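Each "needs transfer" decision above boils down to comparing the image ID stored in the container runtime against the ID minikube recorded in its cache; on mismatch the image is removed with crictl and reloaded with podman. A minimal sketch of that check in Go (illustrative only; needsTransfer is a hypothetical helper, and minikube's actual cache_images.go is more involved):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether the runtime's copy of an image is missing
// or carries a different ID than the one recorded in the cache.
// (Hypothetical helper, shelling out the same way the log lines above show.)
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // image not present in the runtime at all
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	// Image name and expected ID taken from the log lines above.
	fmt.Println(needsTransfer("registry.k8s.io/pause:3.1",
		"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e"))
}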
	I1108 00:14:14.488915   50022 ssh_runner.go:195] Run: crio config
	I1108 00:14:14.557127   50022 cni.go:84] Creating CNI manager for ""
	I1108 00:14:14.557155   50022 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:14:14.557176   50022 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1108 00:14:14.557204   50022 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.49 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-590541 NodeName:old-k8s-version-590541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.49"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.49 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1108 00:14:14.557391   50022 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.49
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-590541"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.49
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.49"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-590541
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.49:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 00:14:14.557508   50022 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-590541 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.49
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-590541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
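The evictionHard thresholds in the KubeletConfiguration rendered above are deliberately "0%", which disables disk-pressure eviction inside the test VM. A quick, hedged way to sanity-check that a rendered fragment like this parses as intended (assumes gopkg.in/yaml.v3; this is not part of minikube):

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// A fragment of the KubeletConfiguration shown in the log above.
const snippet = `
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
`

func main() {
	var cfg struct {
		ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
		EvictionHard                map[string]string `yaml:"evictionHard"`
		FailSwapOn                  bool              `yaml:"failSwapOn"`
	}
	if err := yaml.Unmarshal([]byte(snippet), &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("evictionHard=%v failSwapOn=%v\n", cfg.EvictionHard, cfg.FailSwapOn)
}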
	I1108 00:14:14.557579   50022 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1108 00:14:14.568423   50022 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 00:14:14.568501   50022 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 00:14:14.578581   50022 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1108 00:14:14.596389   50022 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 00:14:14.613956   50022 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1108 00:14:14.631988   50022 ssh_runner.go:195] Run: grep 192.168.50.49	control-plane.minikube.internal$ /etc/hosts
	I1108 00:14:14.636236   50022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.49	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:14:14.648849   50022 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541 for IP: 192.168.50.49
	I1108 00:14:14.648888   50022 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:14:14.649071   50022 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1108 00:14:14.649126   50022 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1108 00:14:14.649231   50022 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/client.key
	I1108 00:14:14.649312   50022 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/apiserver.key.5b7c76e3
	I1108 00:14:14.649375   50022 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/proxy-client.key
	I1108 00:14:14.649542   50022 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem (1338 bytes)
	W1108 00:14:14.649587   50022 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848_empty.pem, impossibly tiny 0 bytes
	I1108 00:14:14.649597   50022 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 00:14:14.649636   50022 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1108 00:14:14.649677   50022 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1108 00:14:14.649714   50022 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1108 00:14:14.649771   50022 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:14:11.058474   51228 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1108 00:14:11.064805   51228 kubeadm.go:787] kubelet initialised
	I1108 00:14:11.064852   51228 kubeadm.go:788] duration metric: took 6.346592ms waiting for restarted kubelet to initialise ...
	I1108 00:14:11.064863   51228 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:14:11.073499   51228 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-rz9t4" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:11.089759   51228 pod_ready.go:97] node "default-k8s-diff-port-039263" hosting pod "coredns-5dd5756b68-rz9t4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.089791   51228 pod_ready.go:81] duration metric: took 16.257238ms waiting for pod "coredns-5dd5756b68-rz9t4" in "kube-system" namespace to be "Ready" ...
	E1108 00:14:11.089803   51228 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-039263" hosting pod "coredns-5dd5756b68-rz9t4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.089811   51228 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:11.100580   51228 pod_ready.go:97] node "default-k8s-diff-port-039263" hosting pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.100605   51228 pod_ready.go:81] duration metric: took 10.783802ms waiting for pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	E1108 00:14:11.100615   51228 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-039263" hosting pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.100621   51228 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:11.113797   51228 pod_ready.go:97] node "default-k8s-diff-port-039263" hosting pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.113826   51228 pod_ready.go:81] duration metric: took 13.195367ms waiting for pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	E1108 00:14:11.113838   51228 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-039263" hosting pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.113847   51228 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:11.124704   51228 pod_ready.go:97] node "default-k8s-diff-port-039263" hosting pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.124736   51228 pod_ready.go:81] duration metric: took 10.87946ms waiting for pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	E1108 00:14:11.124750   51228 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-039263" hosting pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.124760   51228 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-z7b8g" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:11.915650   51228 pod_ready.go:92] pod "kube-proxy-z7b8g" in "kube-system" namespace has status "Ready":"True"
	I1108 00:14:11.915674   51228 pod_ready.go:81] duration metric: took 790.904941ms waiting for pod "kube-proxy-z7b8g" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:11.915686   51228 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:14.011244   51228 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"False"
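The pod_ready lines interleaved throughout this section all follow the same pattern: poll each system pod until its PodReady condition reports True, or the 4m0s budget runs out, skipping pods whose node is not yet Ready. A rough client-go equivalent (assumed shape; minikube's pod_ready.go adds the node-Ready skip logic and duration metrics seen above):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady checks the PodReady condition, the same condition the
// "Ready":"True"/"False" log lines above are reporting.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s budget above
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-scheduler-default-k8s-diff-port-039263", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}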
	I1108 00:14:12.537889   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:14.767882   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:16.322840   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:18.323955   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:14.650662   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1108 00:14:14.682536   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1108 00:14:14.708618   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 00:14:14.737947   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 00:14:14.768365   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 00:14:14.795469   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 00:14:14.824086   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 00:14:14.851375   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 00:14:14.878638   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 00:14:14.906647   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem --> /usr/share/ca-certificates/16848.pem (1338 bytes)
	I1108 00:14:14.933316   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /usr/share/ca-certificates/168482.pem (1708 bytes)
	I1108 00:14:14.961937   50022 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 00:14:14.980167   50022 ssh_runner.go:195] Run: openssl version
	I1108 00:14:14.986053   50022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16848.pem && ln -fs /usr/share/ca-certificates/16848.pem /etc/ssl/certs/16848.pem"
	I1108 00:14:14.996201   50022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16848.pem
	I1108 00:14:15.001410   50022 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1108 00:14:15.001490   50022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16848.pem
	I1108 00:14:15.008681   50022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16848.pem /etc/ssl/certs/51391683.0"
	I1108 00:14:15.022034   50022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168482.pem && ln -fs /usr/share/ca-certificates/168482.pem /etc/ssl/certs/168482.pem"
	I1108 00:14:15.031992   50022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168482.pem
	I1108 00:14:15.037854   50022 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1108 00:14:15.037910   50022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168482.pem
	I1108 00:14:15.045107   50022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168482.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 00:14:15.057464   50022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 00:14:15.070137   50022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:14:15.075848   50022 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:14:15.075917   50022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:14:15.083414   50022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 00:14:15.094499   50022 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1108 00:14:15.099437   50022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 00:14:15.105940   50022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 00:14:15.112527   50022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 00:14:15.118429   50022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 00:14:15.124769   50022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 00:14:15.130975   50022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
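The -checkend 86400 invocations above ask OpenSSL whether each certificate expires within the next 24 hours, which is how minikube decides whether existing certs can be reused. The same check sketched with Go's crypto/x509 (illustrative helper, not minikube code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within duration d, mirroring `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}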
	I1108 00:14:15.136772   50022 kubeadm.go:404] StartCluster: {Name:old-k8s-version-590541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-590541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.49 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:14:15.136903   50022 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 00:14:15.136952   50022 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:14:15.184018   50022 cri.go:89] found id: ""
	I1108 00:14:15.184095   50022 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 00:14:15.196900   50022 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1108 00:14:15.196924   50022 kubeadm.go:636] restartCluster start
	I1108 00:14:15.196994   50022 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 00:14:15.208810   50022 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:15.210399   50022 kubeconfig.go:92] found "old-k8s-version-590541" server: "https://192.168.50.49:8443"
	I1108 00:14:15.214114   50022 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 00:14:15.223586   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:15.223644   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:15.234506   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:15.234525   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:15.234565   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:15.244971   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:15.745626   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:15.745698   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:15.757830   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:16.246012   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:16.246090   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:16.258583   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:16.745965   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:16.746045   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:16.758317   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:17.245985   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:17.246087   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:17.257615   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:17.745646   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:17.745715   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:17.757591   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:18.245666   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:18.245773   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:18.258225   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:18.745765   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:18.745842   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:18.756699   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:19.245946   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:19.246016   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:19.258255   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:16.222461   51228 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:18.722269   51228 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"True"
	I1108 00:14:18.722291   51228 pod_ready.go:81] duration metric: took 6.806598217s waiting for pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:18.722300   51228 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:20.739081   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:17.264976   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:19.265242   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:21.265825   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:20.822592   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:23.321115   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:19.745997   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:19.746135   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:19.757885   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:20.245884   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:20.245988   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:20.258408   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:20.745963   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:20.746035   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:20.757892   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:21.246052   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:21.246133   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:21.258401   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:21.745947   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:21.746040   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:21.759160   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:22.246004   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:22.246075   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:22.258859   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:22.745787   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:22.745889   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:22.758099   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:23.245961   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:23.246068   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:23.258810   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:23.745167   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:23.745248   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:23.757093   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:24.245690   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:24.245751   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:24.258264   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:22.739380   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:24.739502   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:23.766235   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:26.264779   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:25.322215   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:27.322896   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:24.745944   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:24.746024   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:24.759229   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:25.224130   50022 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1108 00:14:25.224188   50022 kubeadm.go:1128] stopping kube-system containers ...
	I1108 00:14:25.224207   50022 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1108 00:14:25.224267   50022 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:14:25.271348   50022 cri.go:89] found id: ""
	I1108 00:14:25.271418   50022 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1108 00:14:25.287540   50022 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:14:25.296398   50022 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:14:25.296452   50022 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:14:25.305111   50022 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1108 00:14:25.305137   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:25.434385   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:26.361847   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:26.561621   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:26.667973   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:26.798155   50022 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:14:26.798240   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:26.822210   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:27.335493   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:27.836175   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:28.336398   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:28.836400   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:28.862790   50022 api_server.go:72] duration metric: took 2.064638513s to wait for apiserver process to appear ...
	I1108 00:14:28.862814   50022 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:14:28.862827   50022 api_server.go:253] Checking apiserver healthz at https://192.168.50.49:8443/healthz ...
	I1108 00:14:26.740013   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:28.740958   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:28.266931   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:30.765036   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:29.827237   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:32.323375   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:33.863452   50022 api_server.go:269] stopped: https://192.168.50.49:8443/healthz: Get "https://192.168.50.49:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1108 00:14:33.863495   50022 api_server.go:253] Checking apiserver healthz at https://192.168.50.49:8443/healthz ...
	I1108 00:14:34.513495   50022 api_server.go:279] https://192.168.50.49:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:14:34.513530   50022 api_server.go:103] status: https://192.168.50.49:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:14:31.240440   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:33.739764   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:35.014492   50022 api_server.go:253] Checking apiserver healthz at https://192.168.50.49:8443/healthz ...
	I1108 00:14:35.020991   50022 api_server.go:279] https://192.168.50.49:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1108 00:14:35.021019   50022 api_server.go:103] status: https://192.168.50.49:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1108 00:14:35.514559   50022 api_server.go:253] Checking apiserver healthz at https://192.168.50.49:8443/healthz ...
	I1108 00:14:35.521451   50022 api_server.go:279] https://192.168.50.49:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1108 00:14:35.521475   50022 api_server.go:103] status: https://192.168.50.49:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1108 00:14:36.014620   50022 api_server.go:253] Checking apiserver healthz at https://192.168.50.49:8443/healthz ...
	I1108 00:14:36.021243   50022 api_server.go:279] https://192.168.50.49:8443/healthz returned 200:
	ok
	I1108 00:14:36.029191   50022 api_server.go:141] control plane version: v1.16.0
	I1108 00:14:36.029214   50022 api_server.go:131] duration metric: took 7.166394703s to wait for apiserver health ...
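The healthz sequence above is typical for a cold apiserver: a 403 while anonymous auth is still being wired up, a 500 while post-start hooks (rbac/bootstrap-roles and friends) finish, then a 200. A bare-bones poller in that spirit (assumptions: self-signed cert, hard-coded endpoint; this is a sketch, not minikube's api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver cert is not trusted by this ad-hoc client.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.50.49:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthy: %s\n", body)
				return
			}
			// 403 (anonymous) and 500 (post-start hooks pending) both mean "retry".
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}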
	I1108 00:14:36.029225   50022 cni.go:84] Creating CNI manager for ""
	I1108 00:14:36.029232   50022 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:14:36.030800   50022 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:14:32.765436   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:35.264657   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:34.825199   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:37.322438   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:36.032078   50022 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:14:36.042827   50022 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1108 00:14:36.062239   50022 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:14:36.070373   50022 system_pods.go:59] 7 kube-system pods found
	I1108 00:14:36.070404   50022 system_pods.go:61] "coredns-5644d7b6d9-cmx8s" [510a3ae2-abff-40f9-8605-7fd6cc5316de] Running
	I1108 00:14:36.070414   50022 system_pods.go:61] "etcd-old-k8s-version-590541" [4597d43f-d424-4591-8a5c-6e4a7d60bb2b] Running
	I1108 00:14:36.070420   50022 system_pods.go:61] "kube-apiserver-old-k8s-version-590541" [353c1157-7cac-4809-91ea-30745ecbc10c] Running
	I1108 00:14:36.070427   50022 system_pods.go:61] "kube-controller-manager-old-k8s-version-590541" [30679f8f-aa28-4349-ada1-97af45c0c065] Running
	I1108 00:14:36.070432   50022 system_pods.go:61] "kube-proxy-r8p96" [21ac95e4-595f-4520-8174-ef5e1334c1be] Running
	I1108 00:14:36.070437   50022 system_pods.go:61] "kube-scheduler-old-k8s-version-590541" [f406d277-d786-417a-9428-8433143db81c] Running
	I1108 00:14:36.070443   50022 system_pods.go:61] "storage-provisioner" [26f85033-bd24-4332-ba8d-1aed49559417] Running
	I1108 00:14:36.070452   50022 system_pods.go:74] duration metric: took 8.188793ms to wait for pod list to return data ...
	I1108 00:14:36.070461   50022 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:14:36.075209   50022 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:14:36.075242   50022 node_conditions.go:123] node cpu capacity is 2
	I1108 00:14:36.075259   50022 node_conditions.go:105] duration metric: took 4.788324ms to run NodePressure ...
	I1108 00:14:36.075286   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:36.310748   50022 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1108 00:14:36.319886   50022 retry.go:31] will retry after 259.644928ms: kubelet not initialised
	I1108 00:14:36.584728   50022 retry.go:31] will retry after 259.541836ms: kubelet not initialised
	I1108 00:14:36.851013   50022 retry.go:31] will retry after 319.229418ms: kubelet not initialised
	I1108 00:14:37.192544   50022 retry.go:31] will retry after 949.166954ms: kubelet not initialised
	I1108 00:14:38.149087   50022 retry.go:31] will retry after 1.159461481s: kubelet not initialised
	I1108 00:14:39.313777   50022 retry.go:31] will retry after 1.441288405s: kubelet not initialised
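The retry.go lines above show roughly doubling, jittered delays between attempts while waiting for the restarted kubelet. A minimal backoff loop of that shape (illustrative only; minikube uses its own retry package with jitter):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff calls f until it succeeds, doubling the delay after
// each failure, up to a fixed number of attempts.
func retryWithBackoff(attempts int, initial time.Duration, f func() error) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		if err := f(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v\n", delay)
		time.Sleep(delay)
		delay *= 2
	}
	return errors.New("retries exhausted")
}

func main() {
	start := time.Now()
	_ = retryWithBackoff(6, 250*time.Millisecond, func() error {
		if time.Since(start) > 2*time.Second {
			return nil // pretend the kubelet initialised
		}
		return errors.New("kubelet not initialised")
	})
}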
	I1108 00:14:36.240206   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:38.240974   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:40.739451   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:37.266643   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:39.267727   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:41.765636   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:39.323180   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:41.323278   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:43.821724   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:40.762380   50022 retry.go:31] will retry after 2.811416386s: kubelet not initialised
	I1108 00:14:43.579217   50022 retry.go:31] will retry after 4.427599597s: kubelet not initialised
	I1108 00:14:42.739823   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:45.238841   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:44.266015   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:46.766564   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:45.822389   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:47.822637   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:48.011401   50022 retry.go:31] will retry after 9.583320686s: kubelet not initialised
	I1108 00:14:47.239708   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:49.739520   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:49.264876   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:51.265467   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:50.321858   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:52.823189   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:51.740005   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:54.239137   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:53.267904   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:55.767709   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:55.321381   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:57.821679   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:57.600096   50022 retry.go:31] will retry after 8.628668417s: kubelet not initialised
	I1108 00:14:56.242527   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:58.740775   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:00.742908   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:58.263898   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:00.264487   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:59.822276   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:02.322959   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:02.744271   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:05.239364   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:02.764787   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:04.767529   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:04.821706   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:06.822611   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:08.822950   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:06.235557   50022 retry.go:31] will retry after 18.967803661s: kubelet not initialised
	I1108 00:15:07.239957   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:09.243640   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:07.268913   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:09.765546   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:10.823397   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:13.320774   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:11.741381   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:14.239143   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:12.265009   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:14.265329   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:16.265470   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:15.322148   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:17.821371   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:16.740364   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:18.742058   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:18.267349   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:20.763380   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:19.821495   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:21.822583   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:21.239196   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:23.239716   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:25.740472   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:22.764934   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:25.264695   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:24.322074   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:26.324255   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:28.823261   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:25.208456   50022 kubeadm.go:787] kubelet initialised
	I1108 00:15:25.208482   50022 kubeadm.go:788] duration metric: took 48.897709945s waiting for restarted kubelet to initialise ...
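The retry.go lines above show the runner polling the restarted kubelet with a growing, jittered backoff until it reports initialised. A minimal sketch of that polling pattern in Go (the probe, intervals, and jitter here are illustrative assumptions, not minikube's actual implementation):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// checkKubelet stands in for the real probe; it fails a few times
// before succeeding, like a kubelet that is still coming up.
func checkKubelet(attempt int) error {
	if attempt < 4 {
		return errors.New("kubelet not initialised")
	}
	return nil
}

func main() {
	backoff := 5 * time.Second
	for attempt := 0; ; attempt++ {
		if err := checkKubelet(attempt); err == nil {
			fmt.Println("kubelet initialised")
			return
		} else {
			// Grow the wait and add jitter so parallel pollers do not sync up.
			wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			backoff = backoff * 3 / 2
		}
	}
}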
	I1108 00:15:25.208492   50022 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:15:25.213730   50022 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-cmx8s" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.220419   50022 pod_ready.go:92] pod "coredns-5644d7b6d9-cmx8s" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:25.220444   50022 pod_ready.go:81] duration metric: took 6.688227ms waiting for pod "coredns-5644d7b6d9-cmx8s" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.220455   50022 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-n42t2" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.225713   50022 pod_ready.go:92] pod "coredns-5644d7b6d9-n42t2" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:25.225734   50022 pod_ready.go:81] duration metric: took 5.271879ms waiting for pod "coredns-5644d7b6d9-n42t2" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.225742   50022 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.231081   50022 pod_ready.go:92] pod "etcd-old-k8s-version-590541" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:25.231102   50022 pod_ready.go:81] duration metric: took 5.353373ms waiting for pod "etcd-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.231113   50022 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.235653   50022 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-590541" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:25.235676   50022 pod_ready.go:81] duration metric: took 4.554135ms waiting for pod "kube-apiserver-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.235687   50022 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.607677   50022 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-590541" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:25.607702   50022 pod_ready.go:81] duration metric: took 372.006515ms waiting for pod "kube-controller-manager-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.607715   50022 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r8p96" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:26.007866   50022 pod_ready.go:92] pod "kube-proxy-r8p96" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:26.007901   50022 pod_ready.go:81] duration metric: took 400.175462ms waiting for pod "kube-proxy-r8p96" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:26.007915   50022 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:26.408998   50022 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-590541" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:26.409023   50022 pod_ready.go:81] duration metric: took 401.100386ms waiting for pod "kube-scheduler-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
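Each pod_ready.go line above is one poll of a pod's Ready condition against the API server. A minimal client-go sketch of the same check, assuming a kubeconfig at an illustrative path and a fixed 2s poll interval (minikube's helper adds per-pod deadlines and richer logging beyond this):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-57f55c9bc5-s7ldx", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println(`pod has status "Ready":"False"`)
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod")
			return
		case <-time.After(2 * time.Second):
		}
	}
}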
	I1108 00:15:26.409037   50022 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:28.714602   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:27.743907   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:30.242025   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:27.764799   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:29.765943   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:31.322316   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:33.821723   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:30.715349   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:33.213961   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:32.739648   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:35.238544   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:32.270073   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:34.764272   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:36.768065   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:36.322383   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:38.821688   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:35.215842   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:37.714618   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:37.239003   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:39.239229   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:39.266142   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:41.765225   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:40.822847   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:42.823419   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:40.214573   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:42.214623   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:41.239832   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:43.740100   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:43.765773   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:45.767613   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:45.323162   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:47.323716   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:44.714312   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:46.714541   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:49.214939   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:46.238097   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:48.240079   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:50.740404   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:48.264657   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:50.266155   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:49.821171   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:51.821247   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:53.821754   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:51.715388   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:54.214072   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:53.239902   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:55.240606   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:52.764709   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:54.765802   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:55.821843   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:57.822037   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:56.214628   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:58.215873   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:57.739805   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:59.742442   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:57.264640   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:59.265598   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:01.269674   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:59.823743   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:02.321221   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:00.716761   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:02.717300   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:02.240157   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:04.740325   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:03.765956   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:06.266810   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:04.322200   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:06.325043   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:08.822004   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:05.214678   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:07.214757   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:06.741067   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:09.238455   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:08.764592   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:10.764740   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:11.321882   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:13.323997   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:09.715347   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:12.215814   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:11.238960   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:13.239188   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:15.239933   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:13.268590   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:15.767860   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:15.822286   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:18.323447   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:14.715001   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:17.214864   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:19.220945   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:17.743653   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:20.239877   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:18.267403   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:20.765825   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:20.828982   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:23.322508   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:21.715604   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:24.215532   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:22.240232   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:24.240410   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:22.767921   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:25.266374   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:25.821672   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:28.323033   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:26.715605   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:29.215673   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:26.240493   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:28.739795   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:30.739838   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:27.268851   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:29.765296   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:30.822234   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:32.822653   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:31.714216   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:33.714677   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:33.238984   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:35.239828   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:32.264549   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:34.765297   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:34.823243   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:37.321349   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:35.715073   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:37.715879   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:37.240347   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:39.739526   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:37.265284   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:39.764898   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:39.322588   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:41.822017   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:40.214804   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:42.714783   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:42.238649   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:44.238830   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:42.265404   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:44.266352   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:46.763687   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:44.321389   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:46.322294   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:48.822670   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:45.215415   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:47.715215   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:46.239884   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:48.740698   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:50.740725   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:48.765820   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:51.265744   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:51.321664   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:53.321945   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:49.715720   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:52.215540   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:53.239897   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:55.241013   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:53.764035   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:55.767704   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:55.324156   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:57.821380   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:54.716014   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:57.213472   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:59.216084   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:57.740250   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:59.740808   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:58.264915   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:00.764064   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:59.823358   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:01.824897   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:03.827668   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:01.714273   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:03.714538   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:02.238718   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:04.239300   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:02.766695   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:05.268491   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:06.321926   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:08.822906   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:06.215268   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:08.215344   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:06.740893   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:09.240404   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:07.764370   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:09.764952   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:11.765807   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:10.823030   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:13.320640   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:10.715494   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:13.214139   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:11.741308   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:13.741849   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:14.265117   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:16.265550   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:15.322703   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:17.822360   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:15.214808   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:17.214944   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:19.215663   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:16.239627   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:18.241991   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:20.742074   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:18.764043   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:20.764244   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:20.322245   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:22.821679   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:21.715000   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:23.715813   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:23.240800   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:25.741203   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:23.264974   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:25.267122   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:24.823144   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:27.322674   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:26.215099   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:28.215710   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:28.242151   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:30.741098   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:27.765060   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:30.266360   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:29.821467   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:31.822093   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:30.714747   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:32.716931   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:33.241199   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:35.744300   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:32.765221   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:34.766163   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:34.320569   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:36.321680   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:38.321803   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:35.215458   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:37.715660   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:38.241103   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:40.241689   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:37.264893   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:39.264980   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:41.764589   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:40.323069   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:42.822323   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:40.214357   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:42.215838   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:42.738943   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:44.738995   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:44.265516   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:46.764435   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:44.827347   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:47.321911   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:44.715762   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:46.716679   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:49.214899   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:46.739838   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:48.740204   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:48.766668   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:51.266657   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:49.822604   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:51.823333   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:51.935354   50505 pod_ready.go:81] duration metric: took 4m0.000854035s waiting for pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace to be "Ready" ...
	E1108 00:17:51.935397   50505 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1108 00:17:51.935438   50505 pod_ready.go:38] duration metric: took 4m11.589382956s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:17:51.935470   50505 kubeadm.go:640] restartCluster took 4m31.32204509s
	W1108 00:17:51.935533   50505 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1108 00:17:51.935560   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
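Once the 4m0s deadline lapses, the runner abandons the restart and resets the node over SSH with the command shown above. A rough sketch of issuing such a command with golang.org/x/crypto/ssh (user, key path, and guest address are placeholder assumptions; minikube's ssh_runner also handles retries and streamed output):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/docker/.ssh/id_rsa") // assumed key path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
	}
	client, err := ssh.Dial("tcp", "192.168.39.10:22", cfg) // assumed guest address
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// Same command shape as the Run: line above.
	out, err := sess.CombinedOutput(`sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force`)
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Fprintln(os.Stderr, "reset failed:", err)
	}
}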
	I1108 00:17:51.715171   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:53.716530   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:51.244682   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:53.741272   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:55.743900   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:53.765757   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:55.766672   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:56.218347   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:58.715621   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:58.246553   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:00.740366   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:58.265496   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:58.958296   50613 pod_ready.go:81] duration metric: took 4m0.000224971s waiting for pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace to be "Ready" ...
	E1108 00:17:58.958324   50613 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1108 00:17:58.958349   50613 pod_ready.go:38] duration metric: took 4m11.678298333s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:17:58.958373   50613 kubeadm.go:640] restartCluster took 4m32.361691152s
	W1108 00:17:58.958429   50613 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1108 00:17:58.958455   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1108 00:18:01.214685   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:03.216848   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:03.239882   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:05.739403   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:06.321352   50505 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.385768547s)
	I1108 00:18:06.321435   50505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:06.335385   50505 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:18:06.345310   50505 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:18:06.355261   50505 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
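The exit-status-2 ls above is the stale-config probe: after the reset the kubeconfig files are gone, so cleanup is skipped. A local Go analogue of the same presence check (file list taken from the log; checking on the host instead of over SSH is the simplification):

package main

import (
	"fmt"
	"os"
)

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	stale := false
	for _, f := range files {
		if _, err := os.Stat(f); err == nil {
			stale = true
			fmt.Println("found existing config:", f)
		}
	}
	if !stale {
		fmt.Println("config check failed, skipping stale config cleanup")
	}
}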
	I1108 00:18:06.355301   50505 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1108 00:18:06.570938   50505 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 00:18:05.715384   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:07.716056   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:07.739455   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:09.740028   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:09.716612   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:12.215477   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:11.742123   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:14.242024   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:15.847386   50613 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (16.888899647s)
	I1108 00:18:15.847471   50613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:15.865800   50613 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:18:15.877857   50613 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:18:15.888952   50613 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:18:15.889014   50613 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1108 00:18:16.126155   50613 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
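Both runners then re-initialise the control plane with kubeadm init, skipping the preflight checks that would trip over leftover directories, manifests, and the in-use kubelet port. A sketch of assembling that invocation, with the config path and ignore list copied from the Start: lines above (wrapping in bash -c mirrors the log; error handling is minimal):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ignored := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"DirAvailable--var-lib-minikube",
		"DirAvailable--var-lib-minikube-etcd",
		"FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
		"FileAvailable--etc-kubernetes-manifests-etcd.yaml",
		"Port-10250", "Swap", "NumCPU", "Mem",
	}
	// The runner wraps the command in bash -c, as the Start: lines show,
	// so $PATH expands on the guest rather than in this process.
	script := `sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" ` +
		`kubeadm init --config /var/tmp/minikube/kubeadm.yaml ` +
		`--ignore-preflight-errors=` + strings.Join(ignored, ",")
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("kubeadm init failed:", err)
	}
}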
	I1108 00:18:17.730060   50505 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1108 00:18:17.730164   50505 kubeadm.go:322] [preflight] Running pre-flight checks
	I1108 00:18:17.730282   50505 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 00:18:17.730411   50505 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 00:18:17.730564   50505 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1108 00:18:17.730648   50505 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 00:18:17.732613   50505 out.go:204]   - Generating certificates and keys ...
	I1108 00:18:17.732709   50505 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1108 00:18:17.732788   50505 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1108 00:18:17.732916   50505 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1108 00:18:17.732995   50505 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1108 00:18:17.733104   50505 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1108 00:18:17.733186   50505 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1108 00:18:17.733265   50505 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1108 00:18:17.733344   50505 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1108 00:18:17.733429   50505 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1108 00:18:17.733526   50505 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1108 00:18:17.733572   50505 kubeadm.go:322] [certs] Using the existing "sa" key
	I1108 00:18:17.733640   50505 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 00:18:17.733699   50505 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 00:18:17.733763   50505 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 00:18:17.733838   50505 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 00:18:17.733905   50505 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 00:18:17.734002   50505 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 00:18:17.734088   50505 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 00:18:17.735708   50505 out.go:204]   - Booting up control plane ...
	I1108 00:18:17.735808   50505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 00:18:17.735898   50505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 00:18:17.735981   50505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 00:18:17.736113   50505 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 00:18:17.736209   50505 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 00:18:17.736255   50505 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1108 00:18:17.736431   50505 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1108 00:18:17.736517   50505 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503639 seconds
	I1108 00:18:17.736637   50505 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 00:18:17.736779   50505 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 00:18:17.736873   50505 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 00:18:17.737093   50505 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-320390 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 00:18:17.737168   50505 kubeadm.go:322] [bootstrap-token] Using token: 8lntxi.1hule2axpc9kkhcs
	I1108 00:18:17.738763   50505 out.go:204]   - Configuring RBAC rules ...
	I1108 00:18:17.738904   50505 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 00:18:17.739014   50505 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 00:18:17.739197   50505 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 00:18:17.739364   50505 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 00:18:17.739534   50505 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 00:18:17.739651   50505 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 00:18:17.739781   50505 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 00:18:17.739829   50505 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1108 00:18:17.739881   50505 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1108 00:18:17.739889   50505 kubeadm.go:322] 
	I1108 00:18:17.739956   50505 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1108 00:18:17.739964   50505 kubeadm.go:322] 
	I1108 00:18:17.740051   50505 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1108 00:18:17.740065   50505 kubeadm.go:322] 
	I1108 00:18:17.740094   50505 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1108 00:18:17.740165   50505 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 00:18:17.740229   50505 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 00:18:17.740239   50505 kubeadm.go:322] 
	I1108 00:18:17.740311   50505 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1108 00:18:17.740320   50505 kubeadm.go:322] 
	I1108 00:18:17.740375   50505 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 00:18:17.740385   50505 kubeadm.go:322] 
	I1108 00:18:17.740443   50505 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1108 00:18:17.740528   50505 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 00:18:17.740629   50505 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 00:18:17.740640   50505 kubeadm.go:322] 
	I1108 00:18:17.740733   50505 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 00:18:17.740840   50505 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1108 00:18:17.740860   50505 kubeadm.go:322] 
	I1108 00:18:17.740959   50505 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8lntxi.1hule2axpc9kkhcs \
	I1108 00:18:17.741077   50505 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 \
	I1108 00:18:17.741106   50505 kubeadm.go:322] 	--control-plane 
	I1108 00:18:17.741114   50505 kubeadm.go:322] 
	I1108 00:18:17.741207   50505 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1108 00:18:17.741221   50505 kubeadm.go:322] 
	I1108 00:18:17.741312   50505 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8lntxi.1hule2axpc9kkhcs \
	I1108 00:18:17.741435   50505 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 
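The printed join command embeds the bootstrap token 8lntxi.1hule2axpc9kkhcs, which kubeadm expires after 24 hours by default, so it cannot be replayed from an old log. A fresh equivalent can be minted on the control plane with standard kubeadm usage (a sketch, not captured in this run):

	kubeadm token create --print-join-command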
	I1108 00:18:17.741451   50505 cni.go:84] Creating CNI manager for ""
	I1108 00:18:17.741460   50505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:18:17.742996   50505 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:18:17.744307   50505 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:18:17.800065   50505 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
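The conflist is rendered in memory and scp'd into the guest, so it never exists on the host side. To inspect the 457-byte bridge config that was written, one could run (a sketch; the profile name is taken from this log):

	minikube -p no-preload-320390 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"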
	I1108 00:18:17.844561   50505 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 00:18:17.844628   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:17.844636   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=no-preload-320390 minikube.k8s.io/updated_at=2023_11_08T00_18_17_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:18.268124   50505 ops.go:34] apiserver oom_adj: -16
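The -16 read back from /proc is the apiserver's oom_adj: a negative score tells the kernel OOM killer to prefer other processes over kube-apiserver under memory pressure. The probe is the same one logged just above:

	cat /proc/$(pgrep kube-apiserver)/oom_adj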
	I1108 00:18:18.268268   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:18.391271   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:18.999821   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:14.715492   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:16.716036   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:19.217395   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:16.739748   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:18.722551   51228 pod_ready.go:81] duration metric: took 4m0.000232672s waiting for pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace to be "Ready" ...
	E1108 00:18:18.722600   51228 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1108 00:18:18.722616   51228 pod_ready.go:38] duration metric: took 4m7.657742468s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:18:18.722637   51228 kubeadm.go:640] restartCluster took 4m28.262375275s
	W1108 00:18:18.722722   51228 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1108 00:18:18.722756   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
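Because the metrics-server pod never reported Ready inside the 4m0s budget, restartCluster is abandoned and minikube falls back to a full kubeadm reset followed by a clean init, which later lines show. A manual probe for the stuck pod would look like this (a sketch; the k8s-app=metrics-server label is an assumption, not shown in this log):

	kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide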
	I1108 00:18:19.500069   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:20.000575   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:20.500545   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:20.999918   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:21.499960   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:22.000673   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:22.499811   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:23.000501   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:23.499942   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:24.000407   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:21.217427   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:23.715751   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:27.224428   50613 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1108 00:18:27.224497   50613 kubeadm.go:322] [preflight] Running pre-flight checks
	I1108 00:18:27.224589   50613 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 00:18:27.224720   50613 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 00:18:27.224916   50613 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1108 00:18:27.225019   50613 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 00:18:27.226893   50613 out.go:204]   - Generating certificates and keys ...
	I1108 00:18:27.227001   50613 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1108 00:18:27.227091   50613 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1108 00:18:27.227201   50613 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1108 00:18:27.227279   50613 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1108 00:18:27.227365   50613 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1108 00:18:27.227433   50613 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1108 00:18:27.227517   50613 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1108 00:18:27.227602   50613 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1108 00:18:27.227719   50613 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1108 00:18:27.227808   50613 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1108 00:18:27.227864   50613 kubeadm.go:322] [certs] Using the existing "sa" key
	I1108 00:18:27.227938   50613 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 00:18:27.228013   50613 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 00:18:27.228102   50613 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 00:18:27.228186   50613 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 00:18:27.228264   50613 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 00:18:27.228387   50613 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 00:18:27.228479   50613 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 00:18:27.229827   50613 out.go:204]   - Booting up control plane ...
	I1108 00:18:27.229950   50613 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 00:18:27.230032   50613 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 00:18:27.230124   50613 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 00:18:27.230265   50613 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 00:18:27.230387   50613 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 00:18:27.230447   50613 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1108 00:18:27.230699   50613 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1108 00:18:27.230810   50613 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503846 seconds
	I1108 00:18:27.230970   50613 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 00:18:27.231145   50613 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 00:18:27.231237   50613 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 00:18:27.231478   50613 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-253253 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 00:18:27.231573   50613 kubeadm.go:322] [bootstrap-token] Using token: vyjibp.12wjj754q6czu5uo
	I1108 00:18:27.233159   50613 out.go:204]   - Configuring RBAC rules ...
	I1108 00:18:27.233266   50613 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 00:18:27.233340   50613 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 00:18:27.233454   50613 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 00:18:27.233558   50613 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 00:18:27.233693   50613 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 00:18:27.233793   50613 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 00:18:27.233943   50613 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 00:18:27.234012   50613 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1108 00:18:27.234074   50613 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1108 00:18:27.234086   50613 kubeadm.go:322] 
	I1108 00:18:27.234174   50613 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1108 00:18:27.234191   50613 kubeadm.go:322] 
	I1108 00:18:27.234300   50613 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1108 00:18:27.234310   50613 kubeadm.go:322] 
	I1108 00:18:27.234337   50613 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1108 00:18:27.234388   50613 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 00:18:27.234432   50613 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 00:18:27.234436   50613 kubeadm.go:322] 
	I1108 00:18:27.234490   50613 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1108 00:18:27.234507   50613 kubeadm.go:322] 
	I1108 00:18:27.234567   50613 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 00:18:27.234577   50613 kubeadm.go:322] 
	I1108 00:18:27.234651   50613 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1108 00:18:27.234756   50613 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 00:18:27.234858   50613 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 00:18:27.234873   50613 kubeadm.go:322] 
	I1108 00:18:27.234959   50613 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 00:18:27.235056   50613 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1108 00:18:27.235066   50613 kubeadm.go:322] 
	I1108 00:18:27.235184   50613 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token vyjibp.12wjj754q6czu5uo \
	I1108 00:18:27.235334   50613 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 \
	I1108 00:18:27.235369   50613 kubeadm.go:322] 	--control-plane 
	I1108 00:18:27.235378   50613 kubeadm.go:322] 
	I1108 00:18:27.235476   50613 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1108 00:18:27.235487   50613 kubeadm.go:322] 
	I1108 00:18:27.235585   50613 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vyjibp.12wjj754q6czu5uo \
	I1108 00:18:27.235734   50613 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 
	I1108 00:18:27.235751   50613 cni.go:84] Creating CNI manager for ""
	I1108 00:18:27.235759   50613 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:18:27.237411   50613 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:18:24.499703   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:24.999659   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:25.499724   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:26.000534   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:26.500532   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:26.999903   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:27.500582   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:28.000156   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:28.500443   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:29.000019   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:26.213623   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:28.214432   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:29.500525   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:29.999698   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:30.173272   50505 kubeadm.go:1081] duration metric: took 12.328709999s to wait for elevateKubeSystemPrivileges.
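elevateKubeSystemPrivileges is the polling loop visible above: minikube re-runs kubectl get sa default roughly every 500ms (per the timestamps) until the default ServiceAccount exists, since that signals the token controller is serving and the minikube-rbac clusterrolebinding can take effect. A roughly equivalent shell loop (a sketch; the exact interval is an assumption):

	until sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done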
	I1108 00:18:30.173304   50505 kubeadm.go:406] StartCluster complete in 5m9.613679996s
	I1108 00:18:30.173323   50505 settings.go:142] acquiring lock: {Name:mk24113e0811d0822c92609e9886aa6fa175d90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:18:30.173399   50505 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:18:30.175022   50505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:18:30.175277   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 00:18:30.175394   50505 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
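Only storage-provisioner, default-storageclass, and metrics-server are true in the toEnable map; every other addon was left off. Toggling one of these outside the test harness is ordinary minikube CLI usage (a sketch; profile name from this log):

	minikube addons enable metrics-server -p no-preload-320390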
	I1108 00:18:30.175512   50505 addons.go:69] Setting storage-provisioner=true in profile "no-preload-320390"
	I1108 00:18:30.175534   50505 addons.go:231] Setting addon storage-provisioner=true in "no-preload-320390"
	W1108 00:18:30.175546   50505 addons.go:240] addon storage-provisioner should already be in state true
	I1108 00:18:30.175591   50505 host.go:66] Checking if "no-preload-320390" exists ...
	I1108 00:18:30.175595   50505 config.go:182] Loaded profile config "no-preload-320390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:18:30.175648   50505 addons.go:69] Setting default-storageclass=true in profile "no-preload-320390"
	I1108 00:18:30.175669   50505 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-320390"
	I1108 00:18:30.175856   50505 addons.go:69] Setting metrics-server=true in profile "no-preload-320390"
	I1108 00:18:30.175880   50505 addons.go:231] Setting addon metrics-server=true in "no-preload-320390"
	W1108 00:18:30.175890   50505 addons.go:240] addon metrics-server should already be in state true
	I1108 00:18:30.175932   50505 host.go:66] Checking if "no-preload-320390" exists ...
	I1108 00:18:30.176004   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.176047   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.176074   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.176110   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.176255   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.176297   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.193487   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34549
	I1108 00:18:30.194065   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.194643   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38457
	I1108 00:18:30.194791   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.194809   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.195197   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.195244   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.195454   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35159
	I1108 00:18:30.195741   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.195758   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.195840   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.195975   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.196019   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.196254   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.196377   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.196401   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.196444   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetState
	I1108 00:18:30.196747   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.197318   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.197365   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.200432   50505 addons.go:231] Setting addon default-storageclass=true in "no-preload-320390"
	W1108 00:18:30.200454   50505 addons.go:240] addon default-storageclass should already be in state true
	I1108 00:18:30.200482   50505 host.go:66] Checking if "no-preload-320390" exists ...
	I1108 00:18:30.200858   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.200904   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.214840   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45815
	I1108 00:18:30.215335   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.215693   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.215710   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.216018   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.216163   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetState
	I1108 00:18:30.216761   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32969
	I1108 00:18:30.217467   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.218005   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:18:30.218255   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.218276   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.218567   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.218686   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetState
	I1108 00:18:30.218895   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33449
	I1108 00:18:30.219282   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.221453   50505 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:18:30.219887   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.220152   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:18:30.227122   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.227187   50505 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:18:30.227203   50505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 00:18:30.227220   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:18:30.229126   50505 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1108 00:18:30.227716   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.230458   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:18:30.231018   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:18:30.231625   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:18:30.231640   50505 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 00:18:30.231664   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:18:30.231663   50505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 00:18:30.231687   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:18:30.231871   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:18:30.232040   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:18:30.232130   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.232164   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.232167   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:18:30.234984   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:18:30.235307   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:18:30.235327   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:18:30.235589   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:18:30.235819   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:18:30.236102   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:18:30.236409   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:18:30.248939   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33483
	I1108 00:18:30.249596   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.250088   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.250105   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.250535   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.250715   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetState
	I1108 00:18:30.252631   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:18:30.252909   50505 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 00:18:30.252923   50505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 00:18:30.252941   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:18:30.255926   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:18:30.256320   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:18:30.256354   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:18:30.256440   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:18:30.256639   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:18:30.256795   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:18:30.257009   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:18:30.299537   50505 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-320390" context rescaled to 1 replicas
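On a single-node profile minikube rescales CoreDNS from the kubeadm default of two replicas down to one. The equivalent manual operation (a sketch using standard kubectl):

	kubectl -n kube-system scale deployment coredns --replicas=1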
	I1108 00:18:30.299586   50505 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.176 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 00:18:30.301520   50505 out.go:177] * Verifying Kubernetes components...
	I1108 00:18:27.238758   50613 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:18:27.263679   50613 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1108 00:18:27.350198   50613 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 00:18:27.350271   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:27.350293   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=embed-certs-253253 minikube.k8s.io/updated_at=2023_11_08T00_18_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:27.409145   50613 ops.go:34] apiserver oom_adj: -16
	I1108 00:18:27.761874   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:27.882030   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:28.495425   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:28.995764   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:29.495154   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:29.994859   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:30.495492   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:30.995328   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:31.495353   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:30.303227   50505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:30.426941   50505 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 00:18:30.426964   50505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1108 00:18:30.450862   50505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:18:30.456250   50505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 00:18:30.482239   50505 node_ready.go:35] waiting up to 6m0s for node "no-preload-320390" to be "Ready" ...
	I1108 00:18:30.482286   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
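The sed pipeline splices a hosts plugin block into the CoreDNS Corefile just ahead of the forward directive, which is what lets pods resolve host.minikube.internal to the host-side gateway. Reassembled from the sed expressions above, the injected stanza reads:

	hosts {
	   192.168.61.1 host.minikube.internal
	   fallthrough
	}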
	I1108 00:18:30.493041   50505 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 00:18:30.493073   50505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 00:18:30.542548   50505 node_ready.go:49] node "no-preload-320390" has status "Ready":"True"
	I1108 00:18:30.542579   50505 node_ready.go:38] duration metric: took 60.300148ms waiting for node "no-preload-320390" to be "Ready" ...
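The node flipped Ready within about 60ms because the apiserver had already been serving for some time. The same gate expressed as a one-liner (standard kubectl; not part of this run):

	kubectl wait --for=condition=Ready node/no-preload-320390 --timeout=6m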
	I1108 00:18:30.542593   50505 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:18:30.554527   50505 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:18:30.554560   50505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 00:18:30.648882   50505 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:30.658134   50505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:18:32.959227   50505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.50832393s)
	I1108 00:18:32.959242   50505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.502960333s)
	I1108 00:18:32.959281   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:32.959287   50505 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.476976723s)
	I1108 00:18:32.959301   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:32.959347   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:32.959307   50505 start.go:926] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1108 00:18:32.959293   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:32.959711   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:32.959729   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:32.959748   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:32.959761   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:32.959771   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:32.959780   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:32.959795   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:32.959807   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:32.960123   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:32.960137   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:32.960207   50505 main.go:141] libmachine: (no-preload-320390) DBG | Closing plugin on server side
	I1108 00:18:32.960229   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:32.960237   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:33.007609   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:33.007641   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:33.007926   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:33.007945   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:33.106167   50505 pod_ready.go:102] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:33.284838   50505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.626637787s)
	I1108 00:18:33.284900   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:33.284916   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:33.285239   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:33.285259   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:33.285269   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:33.285278   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:33.285579   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:33.285612   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:33.285626   50505 addons.go:467] Verifying addon metrics-server=true in "no-preload-320390"
	I1108 00:18:33.285579   50505 main.go:141] libmachine: (no-preload-320390) DBG | Closing plugin on server side
	I1108 00:18:33.288563   50505 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1108 00:18:33.290062   50505 addons.go:502] enable addons completed in 3.114669599s: enabled=[storage-provisioner default-storageclass metrics-server]
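With the addon manifests applied, a quick out-of-band check that metrics-server is actually serving the Metrics API would be (standard kubectl; not part of this run):

	kubectl top nodes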
	I1108 00:18:30.231324   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:32.715318   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:33.473926   51228 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.751140561s)
	I1108 00:18:33.473999   51228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:33.489630   51228 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:18:33.501413   51228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:18:33.513531   51228 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
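This exit status 2 is expected rather than an error: the kubeadm reset a moment earlier deleted all four kubeconfigs, so there is no stale config left to clean and minikube proceeds straight to a fresh kubeadm init on the next line.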
	I1108 00:18:33.513588   51228 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1108 00:18:33.767243   51228 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 00:18:31.995169   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:32.494991   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:32.995423   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:33.494761   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:33.995099   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:34.494829   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:34.995699   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:35.495034   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:35.995563   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:36.494752   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:35.563227   50505 pod_ready.go:102] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:37.563703   50505 pod_ready.go:102] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:34.715399   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:36.717212   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:39.215769   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:36.995285   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:37.495447   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:37.995529   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:38.494898   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:38.995450   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:39.494831   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:39.994880   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:40.097031   50613 kubeadm.go:1081] duration metric: took 12.746819294s to wait for elevateKubeSystemPrivileges.
	I1108 00:18:40.097074   50613 kubeadm.go:406] StartCluster complete in 5m13.552864243s
	I1108 00:18:40.097102   50613 settings.go:142] acquiring lock: {Name:mk24113e0811d0822c92609e9886aa6fa175d90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:18:40.097182   50613 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:18:40.099232   50613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:18:40.099513   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 00:18:40.099522   50613 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1108 00:18:40.099603   50613 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-253253"
	I1108 00:18:40.099612   50613 addons.go:69] Setting default-storageclass=true in profile "embed-certs-253253"
	I1108 00:18:40.099625   50613 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-253253"
	I1108 00:18:40.099626   50613 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-253253"
	W1108 00:18:40.099635   50613 addons.go:240] addon storage-provisioner should already be in state true
	I1108 00:18:40.099675   50613 host.go:66] Checking if "embed-certs-253253" exists ...
	I1108 00:18:40.099724   50613 config.go:182] Loaded profile config "embed-certs-253253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:18:40.099769   50613 addons.go:69] Setting metrics-server=true in profile "embed-certs-253253"
	I1108 00:18:40.099783   50613 addons.go:231] Setting addon metrics-server=true in "embed-certs-253253"
	W1108 00:18:40.099791   50613 addons.go:240] addon metrics-server should already be in state true
	I1108 00:18:40.099827   50613 host.go:66] Checking if "embed-certs-253253" exists ...
	I1108 00:18:40.100063   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.100064   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.100085   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.100086   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.100199   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.100229   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.117281   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35397
	I1108 00:18:40.117806   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.118339   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.118364   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.118717   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.118761   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38821
	I1108 00:18:40.119093   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.119311   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.119334   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.119497   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.119520   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.119668   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
	I1108 00:18:40.119841   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.119970   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.120403   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.120436   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.120443   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.120456   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.120895   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.121048   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetState
	I1108 00:18:40.123728   50613 addons.go:231] Setting addon default-storageclass=true in "embed-certs-253253"
	W1108 00:18:40.123746   50613 addons.go:240] addon default-storageclass should already be in state true
	I1108 00:18:40.123774   50613 host.go:66] Checking if "embed-certs-253253" exists ...
	I1108 00:18:40.124049   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.124073   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.139787   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39437
	I1108 00:18:40.140217   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.140776   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.140799   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.141358   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.143152   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34997
	I1108 00:18:40.143448   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetState
	I1108 00:18:40.144341   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.145156   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.145175   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.145536   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.145695   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:18:40.146126   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.146151   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.147863   50613 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:18:40.149252   50613 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:18:40.149270   50613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 00:18:40.149288   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:18:40.149701   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41685
	I1108 00:18:40.150096   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.150599   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.150613   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.151053   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.151223   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetState
	I1108 00:18:40.152047   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:18:40.152462   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:18:40.152476   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:18:40.152718   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:18:40.152834   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:18:40.152927   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:18:40.153008   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:18:40.153394   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:18:40.155041   50613 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1108 00:18:40.156603   50613 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 00:18:40.156625   50613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 00:18:40.156642   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:18:40.159550   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:18:40.159952   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:18:40.159973   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:18:40.160151   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:18:40.160294   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:18:40.160403   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:18:40.160505   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:18:40.162863   50613 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-253253" context rescaled to 1 replicas
	I1108 00:18:40.162890   50613 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
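(The rescale above is minikube trimming the default two-replica CoreDNS deployment down to one; minikube does it through client-go, but the kubectl equivalent is a one-liner, shown here as a sketch:

    kubectl --context embed-certs-253253 -n kube-system scale deployment coredns --replicas=1
)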
	I1108 00:18:40.164733   50613 out.go:177] * Verifying Kubernetes components...
	I1108 00:18:40.166082   50613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:40.167562   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36079
	I1108 00:18:40.167938   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.168414   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.168433   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.168805   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.169056   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetState
	I1108 00:18:40.170751   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:18:40.171377   50613 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 00:18:40.171389   50613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 00:18:40.171402   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:18:40.174508   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:18:40.174826   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:18:40.174859   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:18:40.175035   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:18:40.175182   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:18:40.175341   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:18:40.175467   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:18:40.387003   50613 node_ready.go:35] waiting up to 6m0s for node "embed-certs-253253" to be "Ready" ...
	I1108 00:18:40.387126   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 00:18:40.398413   50613 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 00:18:40.398489   50613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1108 00:18:40.400162   50613 node_ready.go:49] node "embed-certs-253253" has status "Ready":"True"
	I1108 00:18:40.400189   50613 node_ready.go:38] duration metric: took 13.150355ms waiting for node "embed-certs-253253" to be "Ready" ...
	I1108 00:18:40.400204   50613 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:18:40.416263   50613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 00:18:40.420346   50613 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-thtp4" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:40.441486   50613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:18:40.468701   50613 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 00:18:40.468731   50613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 00:18:40.546438   50613 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:18:40.546475   50613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 00:18:40.620999   50613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:18:41.963134   50613 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.575984932s)
	I1108 00:18:41.963222   50613 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
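(Per the sed expression in the command above, the injected host record leaves the coredns ConfigMap's Corefile with a hosts block of exactly this shape; the rest of the Corefile is version-dependent:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }

It can be checked afterwards with kubectl -n kube-system get configmap coredns -o yaml.)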
	I1108 00:18:41.963099   50613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.546802194s)
	I1108 00:18:41.963311   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:41.963342   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:41.963771   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:41.963821   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:41.963843   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:41.963862   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:41.964176   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:41.964202   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:41.964188   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Closing plugin on server side
	I1108 00:18:41.997903   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:41.997987   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:41.998341   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Closing plugin on server side
	I1108 00:18:41.998428   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:41.998487   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:42.447761   50613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.006222409s)
	I1108 00:18:42.447810   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:42.447824   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:42.448092   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:42.448109   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:42.448110   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Closing plugin on server side
	I1108 00:18:42.448127   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:42.448143   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:42.449994   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Closing plugin on server side
	I1108 00:18:42.450013   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:42.450027   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:42.484250   50613 pod_ready.go:102] pod "coredns-5dd5756b68-thtp4" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:42.788997   50613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.167954058s)
	I1108 00:18:42.789042   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:42.789057   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:42.789342   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Closing plugin on server side
	I1108 00:18:42.789395   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:42.789416   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:42.789427   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:42.789437   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:42.789673   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:42.789698   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:42.789709   50613 addons.go:467] Verifying addon metrics-server=true in "embed-certs-253253"
	I1108 00:18:42.792162   50613 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
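(Note that the metrics-server addon in this run is wired to the deliberately unpullable image fake.domain/registry.k8s.io/echoserver:1.4, per the "Using image" line above, which is why its pod stays Pending for the rest of the log. On a normally configured cluster the addon could be sanity-checked with something like this sketch:

    kubectl --context embed-certs-253253 -n kube-system rollout status deployment/metrics-server --timeout=120s
    kubectl --context embed-certs-253253 top nodes
)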
	I1108 00:18:39.563860   50505 pod_ready.go:102] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:41.565166   50505 pod_ready.go:102] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:44.063902   50505 pod_ready.go:102] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:41.216274   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:43.717636   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:45.631283   51228 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1108 00:18:45.631354   51228 kubeadm.go:322] [preflight] Running pre-flight checks
	I1108 00:18:45.631464   51228 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 00:18:45.631583   51228 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 00:18:45.631736   51228 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1108 00:18:45.631848   51228 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 00:18:45.633488   51228 out.go:204]   - Generating certificates and keys ...
	I1108 00:18:45.633579   51228 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1108 00:18:45.633656   51228 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1108 00:18:45.633756   51228 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1108 00:18:45.633840   51228 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1108 00:18:45.633947   51228 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1108 00:18:45.634041   51228 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1108 00:18:45.634140   51228 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1108 00:18:45.634244   51228 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1108 00:18:45.634357   51228 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1108 00:18:45.634458   51228 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1108 00:18:45.634541   51228 kubeadm.go:322] [certs] Using the existing "sa" key
	I1108 00:18:45.634625   51228 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 00:18:45.634713   51228 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 00:18:45.634781   51228 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 00:18:45.634865   51228 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 00:18:45.634935   51228 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 00:18:45.635044   51228 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 00:18:45.635133   51228 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 00:18:45.636666   51228 out.go:204]   - Booting up control plane ...
	I1108 00:18:45.636755   51228 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 00:18:45.636862   51228 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 00:18:45.636939   51228 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 00:18:45.637065   51228 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 00:18:45.637164   51228 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 00:18:45.637221   51228 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1108 00:18:45.637410   51228 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1108 00:18:45.637479   51228 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.005347 seconds
	I1108 00:18:45.637583   51228 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 00:18:45.637710   51228 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 00:18:45.637782   51228 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 00:18:45.637961   51228 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-039263 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 00:18:45.638007   51228 kubeadm.go:322] [bootstrap-token] Using token: ub1ww5.kh6zrwfrcg8jc9rc
	I1108 00:18:45.639491   51228 out.go:204]   - Configuring RBAC rules ...
	I1108 00:18:45.639627   51228 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 00:18:45.639743   51228 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 00:18:45.639918   51228 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 00:18:45.640060   51228 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 00:18:45.640240   51228 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 00:18:45.640344   51228 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 00:18:45.640487   51228 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 00:18:45.640546   51228 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1108 00:18:45.640625   51228 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1108 00:18:45.640643   51228 kubeadm.go:322] 
	I1108 00:18:45.640726   51228 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1108 00:18:45.640737   51228 kubeadm.go:322] 
	I1108 00:18:45.640850   51228 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1108 00:18:45.640860   51228 kubeadm.go:322] 
	I1108 00:18:45.640891   51228 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1108 00:18:45.640968   51228 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 00:18:45.641042   51228 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 00:18:45.641048   51228 kubeadm.go:322] 
	I1108 00:18:45.641124   51228 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1108 00:18:45.641137   51228 kubeadm.go:322] 
	I1108 00:18:45.641193   51228 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 00:18:45.641204   51228 kubeadm.go:322] 
	I1108 00:18:45.641266   51228 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1108 00:18:45.641372   51228 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 00:18:45.641485   51228 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 00:18:45.641493   51228 kubeadm.go:322] 
	I1108 00:18:45.641589   51228 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 00:18:45.641704   51228 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1108 00:18:45.641714   51228 kubeadm.go:322] 
	I1108 00:18:45.641815   51228 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token ub1ww5.kh6zrwfrcg8jc9rc \
	I1108 00:18:45.641939   51228 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 \
	I1108 00:18:45.641971   51228 kubeadm.go:322] 	--control-plane 
	I1108 00:18:45.641979   51228 kubeadm.go:322] 
	I1108 00:18:45.642084   51228 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1108 00:18:45.642093   51228 kubeadm.go:322] 
	I1108 00:18:45.642216   51228 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token ub1ww5.kh6zrwfrcg8jc9rc \
	I1108 00:18:45.642356   51228 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 
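(Bootstrap tokens such as ub1ww5.kh6zrwfrcg8jc9rc expire after kubeadm's default 24h TTL; if one has lapsed, a fresh join command can be regenerated on the control plane with the standard kubeadm helper, sketched here:

    sudo kubeadm token create --print-join-command
)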
	I1108 00:18:45.642372   51228 cni.go:84] Creating CNI manager for ""
	I1108 00:18:45.642379   51228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:18:45.644712   51228 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:18:45.646211   51228 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:18:45.672621   51228 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
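(The 457-byte file written here is minikube's bridge CNI config; its exact contents depend on the minikube version, but it can be inspected directly on the node, for example via the profile's ssh helper, as a sketch:

    minikube -p default-k8s-diff-port-039263 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist
)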
	I1108 00:18:45.700061   51228 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 00:18:45.700142   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:45.700153   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=default-k8s-diff-port-039263 minikube.k8s.io/updated_at=2023_11_08T00_18_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:45.805900   51228 ops.go:34] apiserver oom_adj: -16
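(An oom_adj of -16 marks the kube-apiserver process as a much less likely target for the kernel OOM killer; oom_adj is the legacy knob with range -17..15, where lower is safer and -17 disables OOM killing entirely. The check above reduces to roughly this sketch:

    # legacy interface; modern kernels also expose oom_score_adj (-1000..1000)
    cat /proc/$(pgrep kube-apiserver | head -n1)/oom_adj
)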
	I1108 00:18:42.794167   50613 addons.go:502] enable addons completed in 2.694639707s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1108 00:18:44.953906   50613 pod_ready.go:92] pod "coredns-5dd5756b68-thtp4" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.953928   50613 pod_ready.go:81] duration metric: took 4.533558234s waiting for pod "coredns-5dd5756b68-thtp4" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.953936   50613 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.958854   50613 pod_ready.go:92] pod "etcd-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.958880   50613 pod_ready.go:81] duration metric: took 4.937561ms waiting for pod "etcd-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.958892   50613 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.964282   50613 pod_ready.go:92] pod "kube-apiserver-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.964305   50613 pod_ready.go:81] duration metric: took 5.40486ms waiting for pod "kube-apiserver-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.964317   50613 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.969544   50613 pod_ready.go:92] pod "kube-controller-manager-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.969561   50613 pod_ready.go:81] duration metric: took 5.237377ms waiting for pod "kube-controller-manager-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.969568   50613 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-shp9z" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.974340   50613 pod_ready.go:92] pod "kube-proxy-shp9z" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.974357   50613 pod_ready.go:81] duration metric: took 4.78369ms waiting for pod "kube-proxy-shp9z" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.974367   50613 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:45.350442   50613 pod_ready.go:92] pod "kube-scheduler-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:45.350465   50613 pod_ready.go:81] duration metric: took 376.091394ms waiting for pod "kube-scheduler-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:45.350473   50613 pod_ready.go:38] duration metric: took 4.950259719s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:18:45.350487   50613 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:18:45.350529   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:18:45.366477   50613 api_server.go:72] duration metric: took 5.203563902s to wait for apiserver process to appear ...
	I1108 00:18:45.366502   50613 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:18:45.366519   50613 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1108 00:18:45.375074   50613 api_server.go:279] https://192.168.39.159:8443/healthz returned 200:
	ok
	I1108 00:18:45.376646   50613 api_server.go:141] control plane version: v1.28.3
	I1108 00:18:45.376666   50613 api_server.go:131] duration metric: took 10.158963ms to wait for apiserver health ...
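(The healthz probe above is a plain HTTPS GET against the apiserver; reproduced by hand it looks like this sketch, where -k skips verification of the cluster's self-signed serving certificate:

    curl -k https://192.168.39.159:8443/healthz     # expect: ok
)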
	I1108 00:18:45.376674   50613 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:18:45.554560   50613 system_pods.go:59] 8 kube-system pods found
	I1108 00:18:45.554598   50613 system_pods.go:61] "coredns-5dd5756b68-thtp4" [a3671b72-d562-4be2-9942-e971ee31b2c3] Running
	I1108 00:18:45.554605   50613 system_pods.go:61] "etcd-embed-certs-253253" [271bb11f-9263-43bb-a1ad-950b066f46bc] Running
	I1108 00:18:45.554611   50613 system_pods.go:61] "kube-apiserver-embed-certs-253253" [f247270e-3c67-4b37-a6ee-31934a59dd3c] Running
	I1108 00:18:45.554618   50613 system_pods.go:61] "kube-controller-manager-embed-certs-253253" [431c2e96-fff2-4076-95d4-11aa43e0d417] Running
	I1108 00:18:45.554624   50613 system_pods.go:61] "kube-proxy-shp9z" [cda240f2-977b-4318-9ee4-74f0090af489] Running
	I1108 00:18:45.554635   50613 system_pods.go:61] "kube-scheduler-embed-certs-253253" [a22238ad-7283-4dbf-8ff2-5626761a6e08] Running
	I1108 00:18:45.554655   50613 system_pods.go:61] "metrics-server-57f55c9bc5-f8rk4" [927cc877-7a22-47e3-b666-1adf0cc1b5c6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:18:45.554697   50613 system_pods.go:61] "storage-provisioner" [fa05e7e5-87e7-43ac-af74-1c8a713b51c5] Running
	I1108 00:18:45.554712   50613 system_pods.go:74] duration metric: took 178.032339ms to wait for pod list to return data ...
	I1108 00:18:45.554722   50613 default_sa.go:34] waiting for default service account to be created ...
	I1108 00:18:45.750181   50613 default_sa.go:45] found service account: "default"
	I1108 00:18:45.750210   50613 default_sa.go:55] duration metric: took 195.480878ms for default service account to be created ...
	I1108 00:18:45.750220   50613 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 00:18:45.953261   50613 system_pods.go:86] 8 kube-system pods found
	I1108 00:18:45.953303   50613 system_pods.go:89] "coredns-5dd5756b68-thtp4" [a3671b72-d562-4be2-9942-e971ee31b2c3] Running
	I1108 00:18:45.953312   50613 system_pods.go:89] "etcd-embed-certs-253253" [271bb11f-9263-43bb-a1ad-950b066f46bc] Running
	I1108 00:18:45.953320   50613 system_pods.go:89] "kube-apiserver-embed-certs-253253" [f247270e-3c67-4b37-a6ee-31934a59dd3c] Running
	I1108 00:18:45.953329   50613 system_pods.go:89] "kube-controller-manager-embed-certs-253253" [431c2e96-fff2-4076-95d4-11aa43e0d417] Running
	I1108 00:18:45.953348   50613 system_pods.go:89] "kube-proxy-shp9z" [cda240f2-977b-4318-9ee4-74f0090af489] Running
	I1108 00:18:45.953360   50613 system_pods.go:89] "kube-scheduler-embed-certs-253253" [a22238ad-7283-4dbf-8ff2-5626761a6e08] Running
	I1108 00:18:45.953375   50613 system_pods.go:89] "metrics-server-57f55c9bc5-f8rk4" [927cc877-7a22-47e3-b666-1adf0cc1b5c6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:18:45.953387   50613 system_pods.go:89] "storage-provisioner" [fa05e7e5-87e7-43ac-af74-1c8a713b51c5] Running
	I1108 00:18:45.953402   50613 system_pods.go:126] duration metric: took 203.174777ms to wait for k8s-apps to be running ...
	I1108 00:18:45.953414   50613 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 00:18:45.953471   50613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:45.969669   50613 system_svc.go:56] duration metric: took 16.24852ms WaitForService to wait for kubelet.
	I1108 00:18:45.969698   50613 kubeadm.go:581] duration metric: took 5.806787278s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1108 00:18:45.969720   50613 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:18:46.150807   50613 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:18:46.150839   50613 node_conditions.go:123] node cpu capacity is 2
	I1108 00:18:46.150853   50613 node_conditions.go:105] duration metric: took 181.127043ms to run NodePressure ...
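(The capacity figures logged above, 17784752Ki of ephemeral storage and 2 CPUs, come straight from the node object's status and can be pulled with a jsonpath query, sketched here:

    kubectl --context embed-certs-253253 get node embed-certs-253253 -o jsonpath='{.status.capacity}'
)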
	I1108 00:18:46.150866   50613 start.go:228] waiting for startup goroutines ...
	I1108 00:18:46.150876   50613 start.go:233] waiting for cluster config update ...
	I1108 00:18:46.150886   50613 start.go:242] writing updated cluster config ...
	I1108 00:18:46.151185   50613 ssh_runner.go:195] Run: rm -f paused
	I1108 00:18:46.209047   50613 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1108 00:18:46.211074   50613 out.go:177] * Done! kubectl is now configured to use "embed-certs-253253" cluster and "default" namespace by default
	I1108 00:18:44.564102   50505 pod_ready.go:97] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.61.176 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-11-08 00:18:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-11-08 00:18:33 +0000 UTC,FinishedAt:2023-11-08 00:18:43 +0000 UTC,ContainerID:cri-o://4ffd62a60718dd1c6133afefc215085069920afc1cca2f055336a977765569cb,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3 ContainerID:cri-o://4ffd62a60718dd1c6133afefc215085069920afc1cca2f055336a977765569cb Started:0xc0035e3d00 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1108 00:18:44.564132   50505 pod_ready.go:81] duration metric: took 13.91522436s waiting for pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace to be "Ready" ...
	E1108 00:18:44.564147   50505 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.61.176 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-11-08 00:18:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-11-08 00:18:33 +0000 UTC,FinishedAt:2023-11-08 00:18:43 +0000 UTC,ContainerID:cri-o://4ffd62a60718dd1c6133afefc215085069920afc1cca2f055336a977765569cb,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3 ContainerID:cri-o://4ffd62a60718dd1c6133afefc215085069920afc1cca2f055336a977765569cb Started:0xc0035e3d00 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
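(A coredns pod stuck in phase Succeeded is not unusual here: it exited cleanly with Reason:Completed, likely the surplus replica removed when minikube rescaled CoreDNS to a single replica, so the waiter skips it and moves on to coredns-5dd5756b68-vl7nr. Pods left in that phase can be listed with a field selector, as a sketch:

    kubectl --context no-preload-320390 -n kube-system get pods --field-selector=status.phase=Succeeded
)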
	I1108 00:18:44.564158   50505 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vl7nr" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.573431   50505 pod_ready.go:92] pod "coredns-5dd5756b68-vl7nr" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.573462   50505 pod_ready.go:81] duration metric: took 9.295648ms waiting for pod "coredns-5dd5756b68-vl7nr" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.573473   50505 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.580792   50505 pod_ready.go:92] pod "etcd-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.580828   50505 pod_ready.go:81] duration metric: took 7.346504ms waiting for pod "etcd-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.580840   50505 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.587095   50505 pod_ready.go:92] pod "kube-apiserver-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.587117   50505 pod_ready.go:81] duration metric: took 6.268891ms waiting for pod "kube-apiserver-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.587130   50505 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.594022   50505 pod_ready.go:92] pod "kube-controller-manager-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.594039   50505 pod_ready.go:81] duration metric: took 6.901477ms waiting for pod "kube-controller-manager-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.594052   50505 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m6k8g" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.960144   50505 pod_ready.go:92] pod "kube-proxy-m6k8g" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.960162   50505 pod_ready.go:81] duration metric: took 366.102529ms waiting for pod "kube-proxy-m6k8g" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.960173   50505 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:45.361366   50505 pod_ready.go:92] pod "kube-scheduler-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:45.361388   50505 pod_ready.go:81] duration metric: took 401.208779ms waiting for pod "kube-scheduler-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:45.361396   50505 pod_ready.go:38] duration metric: took 14.818791823s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:18:45.361408   50505 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:18:45.361453   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:18:45.377632   50505 api_server.go:72] duration metric: took 15.078013421s to wait for apiserver process to appear ...
	I1108 00:18:45.377656   50505 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:18:45.377673   50505 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1108 00:18:45.383912   50505 api_server.go:279] https://192.168.61.176:8443/healthz returned 200:
	ok
	I1108 00:18:45.385131   50505 api_server.go:141] control plane version: v1.28.3
	I1108 00:18:45.385153   50505 api_server.go:131] duration metric: took 7.489916ms to wait for apiserver health ...
	I1108 00:18:45.385163   50505 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:18:45.565081   50505 system_pods.go:59] 8 kube-system pods found
	I1108 00:18:45.565112   50505 system_pods.go:61] "coredns-5dd5756b68-vl7nr" [4c6d5125-ebac-4931-9af7-045d1c4ba2b1] Running
	I1108 00:18:45.565120   50505 system_pods.go:61] "etcd-no-preload-320390" [fed32a26-d2ab-4470-b424-cc123c0afdf2] Running
	I1108 00:18:45.565127   50505 system_pods.go:61] "kube-apiserver-no-preload-320390" [4cc8b2c1-0f11-4fa9-ab08-0b6039e98b08] Running
	I1108 00:18:45.565134   50505 system_pods.go:61] "kube-controller-manager-no-preload-320390" [028b3d4e-ab62-44c3-b78e-268012d13db3] Running
	I1108 00:18:45.565141   50505 system_pods.go:61] "kube-proxy-m6k8g" [60b019bf-527c-4265-a67c-31e6cf377039] Running
	I1108 00:18:45.565149   50505 system_pods.go:61] "kube-scheduler-no-preload-320390" [c9c606b6-8188-4918-a5c6-cdc845ca5fb4] Running
	I1108 00:18:45.565157   50505 system_pods.go:61] "metrics-server-57f55c9bc5-n49bz" [26c5310d-c29f-476a-a520-bd693143e248] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:18:45.565171   50505 system_pods.go:61] "storage-provisioner" [bdba396c-182a-4bef-8ccb-2275534d89c8] Running
	I1108 00:18:45.565185   50505 system_pods.go:74] duration metric: took 180.015317ms to wait for pod list to return data ...
	I1108 00:18:45.565196   50505 default_sa.go:34] waiting for default service account to be created ...
	I1108 00:18:45.760190   50505 default_sa.go:45] found service account: "default"
	I1108 00:18:45.760217   50505 default_sa.go:55] duration metric: took 195.014175ms for default service account to be created ...
	I1108 00:18:45.760227   50505 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 00:18:45.966186   50505 system_pods.go:86] 8 kube-system pods found
	I1108 00:18:45.966223   50505 system_pods.go:89] "coredns-5dd5756b68-vl7nr" [4c6d5125-ebac-4931-9af7-045d1c4ba2b1] Running
	I1108 00:18:45.966231   50505 system_pods.go:89] "etcd-no-preload-320390" [fed32a26-d2ab-4470-b424-cc123c0afdf2] Running
	I1108 00:18:45.966239   50505 system_pods.go:89] "kube-apiserver-no-preload-320390" [4cc8b2c1-0f11-4fa9-ab08-0b6039e98b08] Running
	I1108 00:18:45.966245   50505 system_pods.go:89] "kube-controller-manager-no-preload-320390" [028b3d4e-ab62-44c3-b78e-268012d13db3] Running
	I1108 00:18:45.966252   50505 system_pods.go:89] "kube-proxy-m6k8g" [60b019bf-527c-4265-a67c-31e6cf377039] Running
	I1108 00:18:45.966259   50505 system_pods.go:89] "kube-scheduler-no-preload-320390" [c9c606b6-8188-4918-a5c6-cdc845ca5fb4] Running
	I1108 00:18:45.966268   50505 system_pods.go:89] "metrics-server-57f55c9bc5-n49bz" [26c5310d-c29f-476a-a520-bd693143e248] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:18:45.966279   50505 system_pods.go:89] "storage-provisioner" [bdba396c-182a-4bef-8ccb-2275534d89c8] Running
	I1108 00:18:45.966294   50505 system_pods.go:126] duration metric: took 206.05956ms to wait for k8s-apps to be running ...
	I1108 00:18:45.966305   50505 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 00:18:45.966355   50505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:45.984753   50505 system_svc.go:56] duration metric: took 18.427005ms WaitForService to wait for kubelet.
	I1108 00:18:45.984781   50505 kubeadm.go:581] duration metric: took 15.685164805s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1108 00:18:45.984803   50505 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:18:46.159568   50505 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:18:46.159602   50505 node_conditions.go:123] node cpu capacity is 2
	I1108 00:18:46.159615   50505 node_conditions.go:105] duration metric: took 174.805156ms to run NodePressure ...
	I1108 00:18:46.159627   50505 start.go:228] waiting for startup goroutines ...
	I1108 00:18:46.159636   50505 start.go:233] waiting for cluster config update ...
	I1108 00:18:46.159649   50505 start.go:242] writing updated cluster config ...
	I1108 00:18:46.159934   50505 ssh_runner.go:195] Run: rm -f paused
	I1108 00:18:46.220234   50505 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1108 00:18:46.222217   50505 out.go:177] * Done! kubectl is now configured to use "no-preload-320390" cluster and "default" namespace by default
	I1108 00:18:46.222047   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:48.714709   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:46.109921   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:46.223968   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:46.849987   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:47.349982   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:47.850871   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:48.350081   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:48.850494   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:49.350809   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:49.850515   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:50.350227   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:50.850044   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:50.714976   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:53.214612   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:51.350594   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:51.850705   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:52.349971   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:52.850530   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:53.350696   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:53.850039   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:54.350523   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:54.849805   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:55.350560   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:55.849890   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:56.350679   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:56.849863   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:57.350004   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:57.850463   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:58.349999   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:58.850810   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:58.958213   51228 kubeadm.go:1081] duration metric: took 13.258132625s to wait for elevateKubeSystemPrivileges.
	I1108 00:18:58.958253   51228 kubeadm.go:406] StartCluster complete in 5m8.559036824s
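(The half-minute of repeated "kubectl get sa default" calls above is minikube polling for the default ServiceAccount to exist before proceeding, since the serviceaccount controller creates it asynchronously after namespace creation. As a shell loop it is roughly this sketch:

    until sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5
    done
)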
	I1108 00:18:58.958281   51228 settings.go:142] acquiring lock: {Name:mk24113e0811d0822c92609e9886aa6fa175d90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:18:58.958371   51228 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:18:58.960083   51228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:18:58.960306   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 00:18:58.960417   51228 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1108 00:18:58.960497   51228 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-039263"
	I1108 00:18:58.960505   51228 config.go:182] Loaded profile config "default-k8s-diff-port-039263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:18:58.960517   51228 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-039263"
	I1108 00:18:58.960544   51228 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-039263"
	I1108 00:18:58.960521   51228 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-039263"
	I1108 00:18:58.960538   51228 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-039263"
	I1108 00:18:58.960588   51228 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-039263"
	W1108 00:18:58.960607   51228 addons.go:240] addon metrics-server should already be in state true
	I1108 00:18:58.960654   51228 host.go:66] Checking if "default-k8s-diff-port-039263" exists ...
	W1108 00:18:58.960566   51228 addons.go:240] addon storage-provisioner should already be in state true
	I1108 00:18:58.960732   51228 host.go:66] Checking if "default-k8s-diff-port-039263" exists ...
	I1108 00:18:58.961043   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:58.961079   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:58.961112   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:58.961115   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:58.961155   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:58.961164   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:58.980365   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41725
	I1108 00:18:58.980386   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46535
	I1108 00:18:58.980512   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45857
	I1108 00:18:58.980860   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:58.980912   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:58.980863   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:58.981328   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:58.981350   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:58.981457   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:58.981466   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:58.981477   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:58.981483   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:58.981861   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:58.981861   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:58.981863   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:58.982023   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetState
	I1108 00:18:58.982419   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:58.982429   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:58.982447   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:58.982464   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:58.985852   51228 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-039263"
	W1108 00:18:58.985875   51228 addons.go:240] addon default-storageclass should already be in state true
	I1108 00:18:58.985902   51228 host.go:66] Checking if "default-k8s-diff-port-039263" exists ...
	I1108 00:18:58.986359   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:58.986390   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:58.996161   51228 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-039263" context rescaled to 1 replicas
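
The coredns rescale logged above is functionally a deployment scale; with stock kubectl it would be roughly:

    kubectl --context default-k8s-diff-port-039263 -n kube-system scale deployment coredns --replicas=1
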
	I1108 00:18:58.996200   51228 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.116 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 00:18:58.998257   51228 out.go:177] * Verifying Kubernetes components...
	I1108 00:18:58.999857   51228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:58.999917   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35521
	I1108 00:18:58.998777   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I1108 00:18:59.000380   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:59.001040   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:59.001093   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:59.001205   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:59.001478   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:59.001674   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:59.001690   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:59.001762   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetState
	I1108 00:18:59.002038   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:59.002209   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetState
	I1108 00:18:59.003822   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:18:59.006057   51228 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1108 00:18:59.004254   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:18:59.006174   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46331
	I1108 00:18:59.007678   51228 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 00:18:59.007688   51228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 00:18:59.007706   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:18:59.009545   51228 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:18:55.714548   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:57.715173   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:59.007989   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:59.010470   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:18:59.010632   51228 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:18:59.010640   51228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 00:18:59.010653   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:18:59.011015   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:18:59.011039   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:18:59.011227   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:59.011250   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:59.011650   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:59.011657   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:18:59.012158   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:59.012188   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:59.012671   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:18:59.012805   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:18:59.012925   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:18:59.013938   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:18:59.014329   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:18:59.014348   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:18:59.014493   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:18:59.014645   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:18:59.014770   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:18:59.014879   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:18:59.030160   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44631
	I1108 00:18:59.030558   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:59.031087   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:59.031101   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:59.031353   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:59.031558   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetState
	I1108 00:18:59.033203   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:18:59.033540   51228 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 00:18:59.033556   51228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 00:18:59.033573   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:18:59.036749   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:18:59.037158   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:18:59.037177   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:18:59.037364   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:18:59.037551   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:18:59.037684   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:18:59.037791   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:18:59.349254   51228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 00:18:59.451588   51228 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-039263" to be "Ready" ...
	I1108 00:18:59.451664   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 00:18:59.464584   51228 node_ready.go:49] node "default-k8s-diff-port-039263" has status "Ready":"True"
	I1108 00:18:59.464616   51228 node_ready.go:38] duration metric: took 12.97792ms waiting for node "default-k8s-diff-port-039263" to be "Ready" ...
	I1108 00:18:59.464629   51228 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:18:59.475428   51228 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7ktrv" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:59.481740   51228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:18:59.483627   51228 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 00:18:59.483644   51228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1108 00:18:59.599214   51228 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 00:18:59.599244   51228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 00:18:59.661512   51228 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:18:59.661537   51228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 00:18:59.726775   51228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:19:01.455332   51228 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.003642063s)
	I1108 00:19:01.455368   51228 start.go:926] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
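
The injected record comes from the get/sed/replace pipeline started at 00:18:59.451 above; with the /var/lib/minikube binary paths and kubeconfig flags stripped, the same pipeline is:

    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' \
            -e '/^        errors *$/i \        log' \
      | kubectl replace -f -

i.e. it splices a hosts {} block into the Corefile so host.minikube.internal resolves to the host-side bridge address 192.168.72.1.
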
	I1108 00:19:01.455575   51228 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.106281369s)
	I1108 00:19:01.455635   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:01.455659   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:01.455957   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:01.456004   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:01.456026   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:01.456048   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:01.456296   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:01.456332   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:01.456339   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Closing plugin on server side
	I1108 00:19:01.485941   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:01.485970   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:01.486229   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:01.486287   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Closing plugin on server side
	I1108 00:19:01.486294   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:01.599500   51228 pod_ready.go:102] pod "coredns-5dd5756b68-7ktrv" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:01.893463   51228 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.411687372s)
	I1108 00:19:01.893518   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:01.893530   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:01.893844   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:01.893887   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Closing plugin on server side
	I1108 00:19:01.893904   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:01.893918   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:01.893928   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:01.894199   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:01.894215   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:02.421714   51228 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.694889947s)
	I1108 00:19:02.421768   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:02.421785   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:02.422098   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:02.422123   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:02.422141   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:02.422160   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:02.422138   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Closing plugin on server side
	I1108 00:19:02.422425   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Closing plugin on server side
	I1108 00:19:02.422467   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:02.422480   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:02.422492   51228 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-039263"
	I1108 00:19:02.424446   51228 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
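
With the manifests applied, metrics-server registers the v1beta1.metrics.k8s.io APIService, so a manual spot-check would be:

    kubectl get apiservice v1beta1.metrics.k8s.io
    kubectl top nodes

Here, though, the addon was pointed at fake.domain/registry.k8s.io/echoserver:1.4 (the "Using image" line above), an unresolvable registry, which is presumably why the metrics-server pod remains Pending in the pod listings below.
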
	I1108 00:18:59.715708   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:02.214990   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:02.426041   51228 addons.go:502] enable addons completed in 3.465624772s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1108 00:19:02.549025   51228 pod_ready.go:97] pod "coredns-5dd5756b68-7ktrv" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.72.116 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-11-08 00:18:58 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-11-08 00:19:01 +0000 UTC,FinishedAt:2023-11-08 00:19:01 +0000 UTC,ContainerID:cri-o://31fbf2f57498e1f90b02c6fd31ebc03a12f99cb350d5e2c4e6eb7ae3b30853b9,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://31fbf2f57498e1f90b02c6fd31ebc03a12f99cb350d5e2c4e6eb7ae3b30853b9 Started:0xc0030b331c AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}

	I1108 00:19:02.549056   51228 pod_ready.go:81] duration metric: took 3.073604936s waiting for pod "coredns-5dd5756b68-7ktrv" in "kube-system" namespace to be "Ready" ...
	E1108 00:19:02.549069   51228 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-7ktrv" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.72.116 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-11-08 00:18:58 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-11-08 00:19:01 +0000 UTC,FinishedAt:2023-11-08 00:19:01 +0000 UTC,ContainerID:cri-o://31fbf2f57498e1f90b02c6fd31ebc03a12f99cb350d5e2c4e6eb7ae3b30853b9,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://31fbf2f57498e1f90b02c6fd31ebc03a12f99cb350d5e2c4e6eb7ae3b30853b9 Started:0xc0030b331c AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1108 00:19:02.549076   51228 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-tt9sm" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.096421   51228 pod_ready.go:92] pod "coredns-5dd5756b68-tt9sm" in "kube-system" namespace has status "Ready":"True"
	I1108 00:19:03.096449   51228 pod_ready.go:81] duration metric: took 547.365037ms waiting for pod "coredns-5dd5756b68-tt9sm" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.096461   51228 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.104473   51228 pod_ready.go:92] pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"True"
	I1108 00:19:03.104497   51228 pod_ready.go:81] duration metric: took 8.028055ms waiting for pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.104509   51228 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.108940   51228 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"True"
	I1108 00:19:03.108965   51228 pod_ready.go:81] duration metric: took 4.447315ms waiting for pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.108976   51228 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.458803   51228 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"True"
	I1108 00:19:03.458831   51228 pod_ready.go:81] duration metric: took 349.845574ms waiting for pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.458844   51228 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rhdhg" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:04.256435   51228 pod_ready.go:92] pod "kube-proxy-rhdhg" in "kube-system" namespace has status "Ready":"True"
	I1108 00:19:04.256457   51228 pod_ready.go:81] duration metric: took 797.605956ms waiting for pod "kube-proxy-rhdhg" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:04.256466   51228 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:04.655727   51228 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"True"
	I1108 00:19:04.655750   51228 pod_ready.go:81] duration metric: took 399.277263ms waiting for pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:04.655758   51228 pod_ready.go:38] duration metric: took 5.191103655s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
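
The per-pod waits above poll pod conditions through the API; an equivalent one-shot check with stock kubectl, taking the CoreDNS pods as an example, would be something like:

    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
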
	I1108 00:19:04.655772   51228 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:19:04.655823   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:19:04.671030   51228 api_server.go:72] duration metric: took 5.674798555s to wait for apiserver process to appear ...
	I1108 00:19:04.671059   51228 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:19:04.671076   51228 api_server.go:253] Checking apiserver healthz at https://192.168.72.116:8444/healthz ...
	I1108 00:19:04.677315   51228 api_server.go:279] https://192.168.72.116:8444/healthz returned 200:
	ok
	I1108 00:19:04.678430   51228 api_server.go:141] control plane version: v1.28.3
	I1108 00:19:04.678451   51228 api_server.go:131] duration metric: took 7.384898ms to wait for apiserver health ...
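
The healthz probe is a plain HTTPS GET against the apiserver on the profile's non-default port 8444; it can be reproduced by hand (skipping certificate verification for brevity) as:

    curl -k https://192.168.72.116:8444/healthz
    # prints: ok
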
	I1108 00:19:04.678457   51228 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:19:04.866585   51228 system_pods.go:59] 8 kube-system pods found
	I1108 00:19:04.866617   51228 system_pods.go:61] "coredns-5dd5756b68-tt9sm" [964a0552-9be0-4dbb-9a2f-0be3c93b8f83] Running
	I1108 00:19:04.866622   51228 system_pods.go:61] "etcd-default-k8s-diff-port-039263" [36863807-9899-4a8e-9a18-e3d938be8e8a] Running
	I1108 00:19:04.866626   51228 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-039263" [88677a44-54e3-41d7-8395-7616396a52d4] Running
	I1108 00:19:04.866631   51228 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-039263" [61a04987-85c4-462c-a4a7-1438c079b72b] Running
	I1108 00:19:04.866635   51228 system_pods.go:61] "kube-proxy-rhdhg" [405b26b9-e6b3-440d-8f28-60db650079a8] Running
	I1108 00:19:04.866639   51228 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-039263" [2a36824a-77da-4a54-94f4-484452f1b714] Running
	I1108 00:19:04.866666   51228 system_pods.go:61] "metrics-server-57f55c9bc5-j6t7g" [5c0e827c-8281-4b51-b0c7-d43d0aa22e29] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:19:04.866676   51228 system_pods.go:61] "storage-provisioner" [4cace2ff-d7cd-4d31-9f11-d410bc675cbf] Running
	I1108 00:19:04.866684   51228 system_pods.go:74] duration metric: took 188.222131ms to wait for pod list to return data ...
	I1108 00:19:04.866691   51228 default_sa.go:34] waiting for default service account to be created ...
	I1108 00:19:05.056224   51228 default_sa.go:45] found service account: "default"
	I1108 00:19:05.056251   51228 default_sa.go:55] duration metric: took 189.551289ms for default service account to be created ...
	I1108 00:19:05.056263   51228 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 00:19:05.259774   51228 system_pods.go:86] 8 kube-system pods found
	I1108 00:19:05.259800   51228 system_pods.go:89] "coredns-5dd5756b68-tt9sm" [964a0552-9be0-4dbb-9a2f-0be3c93b8f83] Running
	I1108 00:19:05.259805   51228 system_pods.go:89] "etcd-default-k8s-diff-port-039263" [36863807-9899-4a8e-9a18-e3d938be8e8a] Running
	I1108 00:19:05.259810   51228 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-039263" [88677a44-54e3-41d7-8395-7616396a52d4] Running
	I1108 00:19:05.259814   51228 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-039263" [61a04987-85c4-462c-a4a7-1438c079b72b] Running
	I1108 00:19:05.259818   51228 system_pods.go:89] "kube-proxy-rhdhg" [405b26b9-e6b3-440d-8f28-60db650079a8] Running
	I1108 00:19:05.259822   51228 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-039263" [2a36824a-77da-4a54-94f4-484452f1b714] Running
	I1108 00:19:05.259828   51228 system_pods.go:89] "metrics-server-57f55c9bc5-j6t7g" [5c0e827c-8281-4b51-b0c7-d43d0aa22e29] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:19:05.259832   51228 system_pods.go:89] "storage-provisioner" [4cace2ff-d7cd-4d31-9f11-d410bc675cbf] Running
	I1108 00:19:05.259840   51228 system_pods.go:126] duration metric: took 203.572791ms to wait for k8s-apps to be running ...
	I1108 00:19:05.259846   51228 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 00:19:05.259889   51228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:19:05.274254   51228 system_svc.go:56] duration metric: took 14.400341ms WaitForService to wait for kubelet.
	I1108 00:19:05.274277   51228 kubeadm.go:581] duration metric: took 6.278053459s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1108 00:19:05.274304   51228 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:19:05.457057   51228 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:19:05.457086   51228 node_conditions.go:123] node cpu capacity is 2
	I1108 00:19:05.457097   51228 node_conditions.go:105] duration metric: took 182.787127ms to run NodePressure ...
	I1108 00:19:05.457107   51228 start.go:228] waiting for startup goroutines ...
	I1108 00:19:05.457113   51228 start.go:233] waiting for cluster config update ...
	I1108 00:19:05.457122   51228 start.go:242] writing updated cluster config ...
	I1108 00:19:05.457358   51228 ssh_runner.go:195] Run: rm -f paused
	I1108 00:19:05.507414   51228 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1108 00:19:05.509695   51228 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-039263" cluster and "default" namespace by default
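
At this point the kubeconfig written to /home/jenkins/minikube-integration/17585-9647/kubeconfig has its current-context set to the profile, so unqualified kubectl commands target this cluster:

    kubectl config current-context   # default-k8s-diff-port-039263
    kubectl get nodes
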
	I1108 00:19:04.715259   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:07.214815   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:09.214886   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:11.715679   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:14.215690   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:16.716315   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:19.215323   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:21.715872   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:24.215543   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:26.409609   50022 pod_ready.go:81] duration metric: took 4m0.000552573s waiting for pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace to be "Ready" ...
	E1108 00:19:26.409644   50022 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1108 00:19:26.409659   50022 pod_ready.go:38] duration metric: took 4m1.201158343s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:19:26.409684   50022 kubeadm.go:640] restartCluster took 5m11.212754497s
	W1108 00:19:26.409757   50022 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1108 00:19:26.409790   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1108 00:19:31.401367   50022 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.991549602s)
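
The reset preceding the re-init is the command logged at 00:19:26.409790; run by hand it is:

    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force

where --force skips the confirmation prompt and --cri-socket points kubeadm at the cri-o socket instead of the default runtime.
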
	I1108 00:19:31.401473   50022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:19:31.415823   50022 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:19:31.425384   50022 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:19:31.435585   50022 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:19:31.435635   50022 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1108 00:19:31.492015   50022 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1108 00:19:31.492120   50022 kubeadm.go:322] [preflight] Running pre-flight checks
	I1108 00:19:31.649293   50022 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 00:19:31.649437   50022 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 00:19:31.649605   50022 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1108 00:19:31.886799   50022 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 00:19:31.886955   50022 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 00:19:31.896062   50022 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1108 00:19:32.038269   50022 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 00:19:32.040677   50022 out.go:204]   - Generating certificates and keys ...
	I1108 00:19:32.040833   50022 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1108 00:19:32.040945   50022 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1108 00:19:32.041037   50022 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1108 00:19:32.041085   50022 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1108 00:19:32.041142   50022 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1108 00:19:32.041231   50022 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1108 00:19:32.041346   50022 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1108 00:19:32.041441   50022 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1108 00:19:32.041594   50022 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1108 00:19:32.042173   50022 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1108 00:19:32.042236   50022 kubeadm.go:322] [certs] Using the existing "sa" key
	I1108 00:19:32.042302   50022 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 00:19:32.325005   50022 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 00:19:32.544755   50022 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 00:19:32.726539   50022 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 00:19:32.905403   50022 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 00:19:32.906525   50022 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 00:19:32.908371   50022 out.go:204]   - Booting up control plane ...
	I1108 00:19:32.908514   50022 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 00:19:32.919163   50022 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 00:19:32.919256   50022 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 00:19:32.919387   50022 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 00:19:32.928261   50022 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1108 00:19:42.937037   50022 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.006146 seconds
	I1108 00:19:42.937215   50022 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 00:19:42.955795   50022 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 00:19:43.479726   50022 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 00:19:43.479868   50022 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-590541 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1108 00:19:43.989897   50022 kubeadm.go:322] [bootstrap-token] Using token: rpiq38.6eoemv6ygv6ghnel
	I1108 00:19:43.991262   50022 out.go:204]   - Configuring RBAC rules ...
	I1108 00:19:43.991391   50022 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 00:19:44.001502   50022 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 00:19:44.006931   50022 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 00:19:44.012505   50022 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 00:19:44.021422   50022 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 00:19:44.111517   50022 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1108 00:19:44.412934   50022 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1108 00:19:44.412985   50022 kubeadm.go:322] 
	I1108 00:19:44.413073   50022 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1108 00:19:44.413088   50022 kubeadm.go:322] 
	I1108 00:19:44.413186   50022 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1108 00:19:44.413196   50022 kubeadm.go:322] 
	I1108 00:19:44.413230   50022 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1108 00:19:44.413317   50022 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 00:19:44.413388   50022 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 00:19:44.413398   50022 kubeadm.go:322] 
	I1108 00:19:44.413489   50022 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1108 00:19:44.413608   50022 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 00:19:44.413704   50022 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 00:19:44.413720   50022 kubeadm.go:322] 
	I1108 00:19:44.413851   50022 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1108 00:19:44.413974   50022 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1108 00:19:44.413988   50022 kubeadm.go:322] 
	I1108 00:19:44.414090   50022 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token rpiq38.6eoemv6ygv6ghnel \
	I1108 00:19:44.414288   50022 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 \
	I1108 00:19:44.414337   50022 kubeadm.go:322]     --control-plane 	  
	I1108 00:19:44.414347   50022 kubeadm.go:322] 
	I1108 00:19:44.414458   50022 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1108 00:19:44.414474   50022 kubeadm.go:322] 
	I1108 00:19:44.414593   50022 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token rpiq38.6eoemv6ygv6ghnel \
	I1108 00:19:44.414754   50022 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 
	I1108 00:19:44.416038   50022 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
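
The trailing [WARNING Service-Kubelet] is kubeadm's own output; outside of minikube it would be addressed with:

    sudo systemctl enable kubelet.service

and the printed bootstrap token could be confirmed afterwards with kubeadm token list.
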
	I1108 00:19:44.416063   50022 cni.go:84] Creating CNI manager for ""
	I1108 00:19:44.416073   50022 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:19:44.417877   50022 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:19:44.419195   50022 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:19:44.448380   50022 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
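
The 457-byte conflist itself is not echoed in the log. For illustration only (field values assumed here, not necessarily minikube's actual file), a bridge CNI config of the same general shape looks like:

    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
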
	I1108 00:19:44.474228   50022 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 00:19:44.474339   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:44.474380   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=old-k8s-version-590541 minikube.k8s.io/updated_at=2023_11_08T00_19_44_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
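
The two kubectl calls above grant the kube-system default service account cluster-admin (the minikube-rbac binding) and stamp the node with minikube.k8s.io/* metadata labels; the results can be inspected afterwards with:

    kubectl get clusterrolebinding minikube-rbac
    kubectl get node old-k8s-version-590541 --show-labels
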
	I1108 00:19:44.739449   50022 ops.go:34] apiserver oom_adj: -16
	I1108 00:19:44.739605   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:44.848712   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:45.444347   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:45.944721   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:46.444140   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:46.944185   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:47.444342   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:47.944227   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:48.443941   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:48.944002   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:49.444440   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:49.943801   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:50.444481   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:50.944720   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:51.443857   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:51.943755   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:52.444663   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:52.944052   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:53.443917   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:53.943763   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:54.443886   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:54.944615   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:55.444156   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:55.944693   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:56.443823   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:56.944727   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:57.444188   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:57.943966   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:58.444659   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:58.944651   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:59.061808   50022 kubeadm.go:1081] duration metric: took 14.587519972s to wait for elevateKubeSystemPrivileges.
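
The burst of "kubectl get sa default" runs above is a poll loop: judging by the timestamps, elevateKubeSystemPrivileges retries roughly every 500ms until the default service account exists. A plain-shell equivalent would be:

    until sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
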
	I1108 00:19:59.061855   50022 kubeadm.go:406] StartCluster complete in 5m43.925088245s
	I1108 00:19:59.061878   50022 settings.go:142] acquiring lock: {Name:mk24113e0811d0822c92609e9886aa6fa175d90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:19:59.061962   50022 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:19:59.063740   50022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:19:59.064004   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 00:19:59.064107   50022 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1108 00:19:59.064182   50022 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-590541"
	I1108 00:19:59.064198   50022 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-590541"
	I1108 00:19:59.064213   50022 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-590541"
	W1108 00:19:59.064222   50022 addons.go:240] addon storage-provisioner should already be in state true
	I1108 00:19:59.064224   50022 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-590541"
	I1108 00:19:59.064233   50022 config.go:182] Loaded profile config "old-k8s-version-590541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1108 00:19:59.064236   50022 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-590541"
	I1108 00:19:59.064260   50022 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-590541"
	I1108 00:19:59.064265   50022 host.go:66] Checking if "old-k8s-version-590541" exists ...
	W1108 00:19:59.064274   50022 addons.go:240] addon metrics-server should already be in state true
	I1108 00:19:59.064406   50022 host.go:66] Checking if "old-k8s-version-590541" exists ...
	I1108 00:19:59.064720   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.064757   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.064761   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.064797   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.065271   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.065309   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.082041   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37295
	I1108 00:19:59.082534   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.083051   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.083075   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.083432   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.083970   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.084022   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.084099   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40485
	I1108 00:19:59.084222   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34213
	I1108 00:19:59.084440   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.084605   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.084870   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.084887   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.085151   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.085174   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.085248   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.085427   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetState
	I1108 00:19:59.085480   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.086399   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.086442   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.090677   50022 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-590541"
	W1108 00:19:59.090700   50022 addons.go:240] addon default-storageclass should already be in state true
	I1108 00:19:59.090728   50022 host.go:66] Checking if "old-k8s-version-590541" exists ...
	I1108 00:19:59.091092   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.091130   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.101788   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40869
	I1108 00:19:59.102208   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.102631   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.102648   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.103029   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.103219   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetState
	I1108 00:19:59.104809   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44911
	I1108 00:19:59.104937   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:19:59.106844   50022 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1108 00:19:59.105475   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.108350   50022 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 00:19:59.108374   50022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 00:19:59.108403   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:19:59.108551   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45009
	I1108 00:19:59.108910   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.108930   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.109878   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.109881   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.110039   50022 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-590541" context rescaled to 1 replicas
	I1108 00:19:59.110075   50022 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.49 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 00:19:59.111637   50022 out.go:177] * Verifying Kubernetes components...
	I1108 00:19:59.110208   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetState
	I1108 00:19:59.110398   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.113108   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.113220   50022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:19:59.113743   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.113792   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:19:59.114471   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.114510   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.115179   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:19:59.117011   50022 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:19:59.115897   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:19:59.116172   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:19:59.118325   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:19:59.118358   50022 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:19:59.118370   50022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 00:19:59.118383   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:19:59.118504   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:19:59.118696   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:19:59.118854   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:19:59.120889   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:19:59.121255   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:19:59.121280   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:19:59.121465   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:19:59.121647   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:19:59.121783   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:19:59.121868   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:19:59.135569   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40853
	I1108 00:19:59.135977   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.136428   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.136441   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.136799   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.137027   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetState
	I1108 00:19:59.138503   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:19:59.138735   50022 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 00:19:59.138745   50022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 00:19:59.138758   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:19:59.141494   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:19:59.141870   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:19:59.141895   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:19:59.142046   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:19:59.142248   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:19:59.142370   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:19:59.142592   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
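Each `sshutil.go:53` line above records minikube opening a fresh SSH client to the VM as the `docker` user with the machine's private key. A rough stand-in using golang.org/x/crypto/ssh (the key path and IP are taken from the log; host-key verification is disabled here purely for illustration, and this is not minikube's actual sshutil code):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Key path copied from the sshutil.go:53 log line above.
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only; do not skip verification in real code
    	}
    	client, err := ssh.Dial("tcp", "192.168.50.49:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	fmt.Println("connected as", cfg.User)
    }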
	I1108 00:19:59.281321   50022 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-590541" to be "Ready" ...
	I1108 00:19:59.281572   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 00:19:59.284783   50022 node_ready.go:49] node "old-k8s-version-590541" has status "Ready":"True"
	I1108 00:19:59.284804   50022 node_ready.go:38] duration metric: took 3.444344ms waiting for node "old-k8s-version-590541" to be "Ready" ...
	I1108 00:19:59.284830   50022 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:19:59.290322   50022 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:59.290908   50022 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 00:19:59.290925   50022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1108 00:19:59.311485   50022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:19:59.346809   50022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 00:19:59.350361   50022 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 00:19:59.350385   50022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 00:19:59.403305   50022 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:19:59.403328   50022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 00:19:59.479823   50022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:20:00.224554   50022 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
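The `host record injected` line is the result of the long sed pipeline logged earlier: minikube dumps the `coredns` ConfigMap, inserts a `hosts` block ahead of the `forward . /etc/resolv.conf` directive so pods can resolve `host.minikube.internal`, and `kubectl replace`s the result. A minimal sketch of just the Corefile edit (the gateway IP 192.168.50.1 comes from the log; minikube does this with sed over SSH, not in-process Go):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHostRecord inserts a CoreDNS `hosts` block immediately before the
    // `forward . /etc/resolv.conf` directive, mirroring the sed edit in the log.
    func injectHostRecord(corefile, hostIP string) string {
    	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
    	var b strings.Builder
    	for _, line := range strings.SplitAfter(corefile, "\n") {
    		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
    			b.WriteString(hosts) // hosts block must come before forward so it wins the lookup
    		}
    		b.WriteString(line)
    	}
    	return b.String()
    }

    func main() {
    	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
    	fmt.Print(injectHostRecord(corefile, "192.168.50.1"))
    }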
	I1108 00:20:00.659427   50022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.347903115s)
	I1108 00:20:00.659441   50022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.312604515s)
	I1108 00:20:00.659501   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.659533   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.659536   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.659549   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.659834   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.659857   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.659867   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.659876   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.659933   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Closing plugin on server side
	I1108 00:20:00.659981   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.660022   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.660051   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.660062   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.660131   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Closing plugin on server side
	I1108 00:20:00.660242   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.660254   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.660300   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.660321   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.851614   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.851637   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.851930   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Closing plugin on server side
	I1108 00:20:00.851996   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.852027   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.992341   50022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.5124613s)
	I1108 00:20:00.992412   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.992429   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.992774   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Closing plugin on server side
	I1108 00:20:00.992811   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.992830   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.992841   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.992854   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.993100   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.993122   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.993162   50022 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-590541"
	I1108 00:20:00.995051   50022 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1108 00:20:00.996839   50022 addons.go:502] enable addons completed in 1.932740124s: enabled=[storage-provisioner default-storageclass metrics-server]
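Every addon enabled above follows the same two-step pattern visible in the log: the manifest bytes are copied onto the node (the `scp memory --> /etc/kubernetes/addons/...` lines) and then applied with the node-local kubectl under the node's own kubeconfig. A hedged sketch of that flow using plain `ssh` with `sudo tee` (host and key path are illustrative; minikube uses its internal SSH runner rather than shelling out):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // applyAddon streams a local manifest to the node via `sudo tee` (the
    // destination under /etc/kubernetes needs root) and then applies it with
    // the node-local kubectl, mirroring the scp + apply pair in the log.
    func applyAddon(host, keyPath, localManifest, remotePath string) error {
    	f, err := os.Open(localManifest)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	copyCmd := exec.Command("ssh", "-i", keyPath, "docker@"+host,
    		"sudo tee "+remotePath+" >/dev/null")
    	copyCmd.Stdin = f
    	if out, err := copyCmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("copy manifest: %v: %s", err, out)
    	}
    	applyCmd := exec.Command("ssh", "-i", keyPath, "docker@"+host,
    		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig "+
    			"/var/lib/minikube/binaries/v1.16.0/kubectl apply -f "+remotePath)
    	if out, err := applyCmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("kubectl apply: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	err := applyAddon("192.168.50.49", os.ExpandEnv("$HOME/.ssh/id_rsa"),
    		"storage-provisioner.yaml", "/etc/kubernetes/addons/storage-provisioner.yaml")
    	if err != nil {
    		fmt.Println(err)
    	}
    }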
	I1108 00:20:01.324759   50022 pod_ready.go:102] pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace has status "Ready":"False"
	I1108 00:20:03.823744   50022 pod_ready.go:102] pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace has status "Ready":"False"
	I1108 00:20:06.322994   50022 pod_ready.go:102] pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace has status "Ready":"False"
	I1108 00:20:08.822755   50022 pod_ready.go:102] pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace has status "Ready":"False"
	I1108 00:20:10.823247   50022 pod_ready.go:102] pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace has status "Ready":"False"
	I1108 00:20:12.819017   50022 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-979rq" not found
	I1108 00:20:12.819052   50022 pod_ready.go:81] duration metric: took 13.528699598s waiting for pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace to be "Ready" ...
	E1108 00:20:12.819067   50022 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-979rq" not found
	I1108 00:20:12.819075   50022 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-tbfp7" in "kube-system" namespace to be "Ready" ...
	I1108 00:20:12.825970   50022 pod_ready.go:92] pod "coredns-5644d7b6d9-tbfp7" in "kube-system" namespace has status "Ready":"True"
	I1108 00:20:12.825988   50022 pod_ready.go:81] duration metric: took 6.906077ms waiting for pod "coredns-5644d7b6d9-tbfp7" in "kube-system" namespace to be "Ready" ...
	I1108 00:20:12.825996   50022 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p27g4" in "kube-system" namespace to be "Ready" ...
	I1108 00:20:12.830826   50022 pod_ready.go:92] pod "kube-proxy-p27g4" in "kube-system" namespace has status "Ready":"True"
	I1108 00:20:12.830843   50022 pod_ready.go:81] duration metric: took 4.841517ms waiting for pod "kube-proxy-p27g4" in "kube-system" namespace to be "Ready" ...
	I1108 00:20:12.830852   50022 pod_ready.go:38] duration metric: took 13.54601076s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
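The `pod_ready.go` lines above poll each system pod until its Ready condition reports True, logging `"Ready":"False"` and waiting otherwise. The predicate behind those status lines amounts to a scan of the pod's conditions; a dependency-free sketch with a stand-in struct (the real code inspects a corev1.Pod's Status.Conditions via client-go):

    package main

    import "fmt"

    // podCondition mirrors the two corev1.PodCondition fields the wait loop
    // cares about; a local struct keeps the sketch dependency-free.
    type podCondition struct {
    	Type   string
    	Status string
    }

    // isPodReady reports whether the conditions contain Ready=True, which is
    // what each `has status "Ready":"True"` log line reflects.
    func isPodReady(conds []podCondition) bool {
    	for _, c := range conds {
    		if c.Type == "Ready" && c.Status == "True" {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	conds := []podCondition{
    		{Type: "PodScheduled", Status: "True"},
    		{Type: "Ready", Status: "False"}, // containers not yet ready
    	}
    	fmt.Println(isPodReady(conds)) // false
    }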
	I1108 00:20:12.830866   50022 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:20:12.830909   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:20:12.849600   50022 api_server.go:72] duration metric: took 13.739491815s to wait for apiserver process to appear ...
	I1108 00:20:12.849634   50022 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:20:12.849653   50022 api_server.go:253] Checking apiserver healthz at https://192.168.50.49:8443/healthz ...
	I1108 00:20:12.856740   50022 api_server.go:279] https://192.168.50.49:8443/healthz returned 200:
	ok
	I1108 00:20:12.857940   50022 api_server.go:141] control plane version: v1.16.0
	I1108 00:20:12.857960   50022 api_server.go:131] duration metric: took 8.319568ms to wait for apiserver health ...
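The healthz probe logged just above is a plain HTTPS GET against the apiserver's /healthz endpoint, considered healthy once it returns 200 with the body "ok". A minimal polling sketch (TLS verification is skipped here only to keep the example self-contained; minikube verifies against the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // healthz polls the apiserver /healthz endpoint until it answers 200 "ok"
    // or the timeout elapses, mirroring the check in the log above.
    func healthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // demo only
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	fmt.Println(healthz("https://192.168.50.49:8443/healthz", 30*time.Second))
    }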
	I1108 00:20:12.857967   50022 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:20:12.862192   50022 system_pods.go:59] 4 kube-system pods found
	I1108 00:20:12.862217   50022 system_pods.go:61] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:12.862222   50022 system_pods.go:61] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:12.862230   50022 system_pods.go:61] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:12.862239   50022 system_pods.go:61] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:12.862248   50022 system_pods.go:74] duration metric: took 4.275078ms to wait for pod list to return data ...
	I1108 00:20:12.862257   50022 default_sa.go:34] waiting for default service account to be created ...
	I1108 00:20:12.867018   50022 default_sa.go:45] found service account: "default"
	I1108 00:20:12.867043   50022 default_sa.go:55] duration metric: took 4.778337ms for default service account to be created ...
	I1108 00:20:12.867052   50022 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 00:20:12.871638   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:12.871664   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:12.871671   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:12.871682   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:12.871688   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:12.871706   50022 retry.go:31] will retry after 307.408821ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:13.184897   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:13.184927   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:13.184944   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:13.184954   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:13.184963   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:13.184984   50022 retry.go:31] will retry after 301.786347ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:13.492026   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:13.492053   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:13.492058   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:13.492065   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:13.492070   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:13.492085   50022 retry.go:31] will retry after 396.219719ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:13.893320   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:13.893348   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:13.893356   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:13.893366   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:13.893372   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:13.893390   50022 retry.go:31] will retry after 592.540002ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:14.490613   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:14.490638   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:14.490644   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:14.490651   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:14.490655   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:14.490670   50022 retry.go:31] will retry after 512.19038ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:15.008506   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:15.008533   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:15.008539   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:15.008545   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:15.008586   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:15.008606   50022 retry.go:31] will retry after 704.779032ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:15.719115   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:15.719140   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:15.719145   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:15.719152   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:15.719156   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:15.719174   50022 retry.go:31] will retry after 892.457504ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:16.616738   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:16.616764   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:16.616770   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:16.616776   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:16.616781   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:16.616795   50022 retry.go:31] will retry after 1.107800827s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:17.729962   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:17.729989   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:17.729997   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:17.730007   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:17.730014   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:17.730032   50022 retry.go:31] will retry after 1.24176205s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:18.976866   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:18.976891   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:18.976897   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:18.976905   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:18.976910   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:18.976925   50022 retry.go:31] will retry after 1.449825188s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:20.432723   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:20.432753   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:20.432760   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:20.432770   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:20.432776   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:20.432796   50022 retry.go:31] will retry after 1.764186569s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:22.202432   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:22.202465   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:22.202473   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:22.202484   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:22.202491   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:22.202522   50022 retry.go:31] will retry after 3.392893976s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:25.600685   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:25.600712   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:25.600717   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:25.600723   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:25.600728   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:25.600743   50022 retry.go:31] will retry after 3.537590817s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:29.143439   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:29.143464   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:29.143468   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:29.143475   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:29.143482   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:29.143502   50022 retry.go:31] will retry after 3.82527374s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:32.973763   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:32.973796   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:32.973804   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:32.973814   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:32.973821   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:32.973840   50022 retry.go:31] will retry after 6.225201923s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:39.204648   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:39.204682   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:39.204690   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:39.204702   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:39.204710   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:39.204729   50022 retry.go:31] will retry after 7.177772259s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:46.388992   50022 system_pods.go:86] 5 kube-system pods found
	I1108 00:20:46.389016   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:46.389022   50022 system_pods.go:89] "kube-apiserver-old-k8s-version-590541" [87b2cf34-c41c-47e0-9042-75cc9f45a3c5] Pending
	I1108 00:20:46.389025   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:46.389032   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:46.389037   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:46.389052   50022 retry.go:31] will retry after 8.995080935s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:55.391202   50022 system_pods.go:86] 7 kube-system pods found
	I1108 00:20:55.391228   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:55.391233   50022 system_pods.go:89] "etcd-old-k8s-version-590541" [0efed662-1891-4909-9452-76ec2984dbe2] Running
	I1108 00:20:55.391237   50022 system_pods.go:89] "kube-apiserver-old-k8s-version-590541" [87b2cf34-c41c-47e0-9042-75cc9f45a3c5] Running
	I1108 00:20:55.391241   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:55.391245   50022 system_pods.go:89] "kube-scheduler-old-k8s-version-590541" [a722f002-c4ab-467a-810a-20cf46a13211] Pending
	I1108 00:20:55.391252   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:55.391256   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:55.391272   50022 retry.go:31] will retry after 10.028239262s: missing components: kube-controller-manager, kube-scheduler
	I1108 00:21:05.426292   50022 system_pods.go:86] 8 kube-system pods found
	I1108 00:21:05.426317   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:21:05.426323   50022 system_pods.go:89] "etcd-old-k8s-version-590541" [0efed662-1891-4909-9452-76ec2984dbe2] Running
	I1108 00:21:05.426327   50022 system_pods.go:89] "kube-apiserver-old-k8s-version-590541" [87b2cf34-c41c-47e0-9042-75cc9f45a3c5] Running
	I1108 00:21:05.426331   50022 system_pods.go:89] "kube-controller-manager-old-k8s-version-590541" [90563d50-3d48-4256-ae70-82a2a6d1c251] Running
	I1108 00:21:05.426335   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:21:05.426339   50022 system_pods.go:89] "kube-scheduler-old-k8s-version-590541" [a722f002-c4ab-467a-810a-20cf46a13211] Running
	I1108 00:21:05.426345   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:21:05.426349   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:21:05.426356   50022 system_pods.go:126] duration metric: took 52.559298515s to wait for k8s-apps to be running ...
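The long run of `retry.go:31` lines above is minikube's wait loop: each failed check schedules the next attempt after a roughly exponentially growing, jittered delay (307ms, 592ms, ... 10s in the log) until etcd, kube-apiserver, kube-controller-manager and kube-scheduler all report Running. A minimal backoff sketch of the same shape (the initial delay, growth factor and cap are eyeballed from the observed delays, not minikube's actual constants):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff calls check until it succeeds or the deadline passes,
    // sleeping an exponentially growing, jittered interval between attempts.
    func retryWithBackoff(timeout time.Duration, check func() error) error {
    	delay := 300 * time.Millisecond // observed first retry ~307ms
    	deadline := time.Now().Add(timeout)
    	for {
    		err := check()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out: %v", err)
    		}
    		jittered := delay + time.Duration(rand.Int63n(int64(delay/4)))
    		fmt.Printf("will retry after %v: %v\n", jittered, err)
    		time.Sleep(jittered)
    		if delay < 8*time.Second {
    			delay = delay * 3 / 2 // grow ~1.5x per attempt, matching the log's progression
    		}
    	}
    }

    func main() {
    	attempts := 0
    	_ = retryWithBackoff(time.Minute, func() error {
    		attempts++
    		if attempts < 5 {
    			return fmt.Errorf("missing components: kube-scheduler")
    		}
    		return nil
    	})
    }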
	I1108 00:21:05.426363   50022 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 00:21:05.426403   50022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:21:05.443281   50022 system_svc.go:56] duration metric: took 16.903571ms WaitForService to wait for kubelet.
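The kubelet check above shells out to systemd exactly as logged: `systemctl is-active --quiet` prints nothing and signals the unit state via its exit code, so a zero exit means the service is running. A tiny sketch of the same probe (the command string is copied verbatim from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // kubeletActive mirrors the `sudo systemctl is-active --quiet service kubelet`
    // check in the log: --quiet suppresses output, and exit status 0 means active.
    func kubeletActive() bool {
    	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
    }

    func main() {
    	fmt.Println("kubelet active:", kubeletActive())
    }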
	I1108 00:21:05.443315   50022 kubeadm.go:581] duration metric: took 1m6.333213694s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1108 00:21:05.443337   50022 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:21:05.447040   50022 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:21:05.447064   50022 node_conditions.go:123] node cpu capacity is 2
	I1108 00:21:05.447074   50022 node_conditions.go:105] duration metric: took 3.731788ms to run NodePressure ...
	I1108 00:21:05.447083   50022 start.go:228] waiting for startup goroutines ...
	I1108 00:21:05.447089   50022 start.go:233] waiting for cluster config update ...
	I1108 00:21:05.447098   50022 start.go:242] writing updated cluster config ...
	I1108 00:21:05.447409   50022 ssh_runner.go:195] Run: rm -f paused
	I1108 00:21:05.496203   50022 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1108 00:21:05.498233   50022 out.go:177] 
	W1108 00:21:05.499660   50022 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1108 00:21:05.500985   50022 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1108 00:21:05.502464   50022 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-590541" cluster and "default" namespace by default
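The warning above comes from comparing minor version numbers: the host kubectl is 1.28 while the cluster runs 1.16, a skew of 12, far outside kubectl's officially supported window of one minor version either side of the server. A sketch of that skew computation (a guess at the shape of the check, not minikube's actual code):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorSkew returns the absolute difference between the minor components
    // of two "major.minor.patch" version strings, as in "(minor skew: 12)".
    func minorSkew(a, b string) int {
    	minor := func(v string) int {
    		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    		if len(parts) < 2 {
    			return 0
    		}
    		n, _ := strconv.Atoi(parts[1])
    		return n
    	}
    	d := minor(a) - minor(b)
    	if d < 0 {
    		d = -d
    	}
    	return d
    }

    func main() {
    	skew := minorSkew("1.28.3", "1.16.0")
    	fmt.Printf("minor skew: %d\n", skew)
    	if skew > 1 { // kubectl officially supports +/-1 minor version of the server
    		fmt.Println("! kubectl may have incompatibilities with this cluster")
    	}
    }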
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-11-08 00:12:52 UTC, ends at Wed 2023-11-08 00:27:48 UTC. --
	Nov 08 00:27:48 no-preload-320390 crio[713]: time="2023-11-08 00:27:48.124796449Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e6aa53ba-379d-468f-a00a-06aa9f5b2acb name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:27:48 no-preload-320390 crio[713]: time="2023-11-08 00:27:48.125095460Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:89294275812d549eab8ce0cdac2ded45c29910a232ba43955c5fc671f9456729,PodSandboxId:2a22830dc4b11ebe174d391e51d48e317426101abae8af821ca364240146aa86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1699402714755052561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdba396c-182a-4bef-8ccb-2275534d89c8,},Annotations:map[string]string{io.kubernetes.container.hash: ef424d44,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c34465a005584f8717eff45c810e58337f80bc5f87ede098533bcc716cc6b82a,PodSandboxId:8f9d54f627ac9cf4a6a158bd59974782c391c94abf0cbac4a88992ab90057fb8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1699402714532925842,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6k8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b019bf-527c-4265-a67c-31e6cf377039,},Annotations:map[string]string{io.kubernetes.container.hash: 2cbb9000,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ea18eeebb997e1c420490aaca5e3210cb999e8634e44fc18955bf19502a0ba,PodSandboxId:c8537f902f5b485b7f8dd3a7b90c5a4fda375f2774c608d7fe9fd206b97c01ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1699402713370700547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vl7nr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c6d5125-ebac-4931-9af7-045d1c4ba2b1,},Annotations:map[string]string{io.kubernetes.container.hash: e6be1849,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2b9790aba3f68a303cae1dfd0380a20f5abc6d0ca158a81cc13cf50ee09bb4a,PodSandboxId:bfd143469feb56623caea7b93a30b284d3103b7754676c9795e8aece29b963ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1699402690346299680,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8e9a6ea75c1f836169baf57b947fb963,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d47be6e9b0407212873db2905fdaae6db1089403681c1a53e30f2bc8f15aafb7,PodSandboxId:d42814db488e141657413d1b4ebe453ae8e872571e5ef6efff0f41641b0ae9d6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1699402689852347424,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d317eb0edde6b082ddeb87a0edd3fd,},Annotations:map
[string]string{io.kubernetes.container.hash: 941977ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b1c3ebbbf66c509c1ccf4591f3ebb7e8269c7d2aa74f294406eac958d98bc4b,PodSandboxId:67f3c4ee09cc7d810051e7aed7a9e2d08ce87c234c06f01ae8e86c204fdb2070,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1699402689518686511,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afb0f26b2571b2956b1d2260c
a7e78ae,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d181d8164e69f83813d7d59131e829c75cadfbd00f3e97edae5b82b47acddbe,PodSandboxId:b7eeef6985dd20728e93f2bffb2d5ee0d9bcc5bdf31acdf2b51f2dec48e4228e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1699402689555811674,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e8a3996624e70e2d7824097f608acdb,},A
nnotations:map[string]string{io.kubernetes.container.hash: 6d2b62dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e6aa53ba-379d-468f-a00a-06aa9f5b2acb name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:27:48 no-preload-320390 crio[713]: time="2023-11-08 00:27:48.173088889Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3996df63-86ec-49eb-9365-325b2b323f87 name=/runtime.v1.RuntimeService/Version
	Nov 08 00:27:48 no-preload-320390 crio[713]: time="2023-11-08 00:27:48.173262834Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3996df63-86ec-49eb-9365-325b2b323f87 name=/runtime.v1.RuntimeService/Version
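The journal entries above are CRI-O answering kubelet's gRPC polls over the CRI: `/runtime.v1.RuntimeService/Version`, `ImageFsInfo`, and `ListContainers` round-trips every few hundred milliseconds. A minimal sketch of issuing the same Version RPC against CRI-O's socket with the cri-api client (the socket path is CRI-O's usual default and may differ on other installs):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()
    	// Dial CRI-O's CRI socket; local unix sockets need no transport security.
    	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()
    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	// The same RPC the journal shows: /runtime.v1.RuntimeService/Version.
    	resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s %s (CRI %s)\n", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
    }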
	Nov 08 00:27:48 no-preload-320390 crio[713]: time="2023-11-08 00:27:48.175839294Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3b030543-0b9c-4351-8401-e446ec130944 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:27:48 no-preload-320390 crio[713]: time="2023-11-08 00:27:48.176423799Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699403268176399075,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=3b030543-0b9c-4351-8401-e446ec130944 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:27:48 no-preload-320390 crio[713]: time="2023-11-08 00:27:48.177065978Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e7dc762e-2608-48a5-abed-f4dbdfac23ac name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:27:48 no-preload-320390 crio[713]: time="2023-11-08 00:27:48.177112190Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e7dc762e-2608-48a5-abed-f4dbdfac23ac name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:27:48 no-preload-320390 crio[713]: time="2023-11-08 00:27:48.177386712Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:89294275812d549eab8ce0cdac2ded45c29910a232ba43955c5fc671f9456729,PodSandboxId:2a22830dc4b11ebe174d391e51d48e317426101abae8af821ca364240146aa86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1699402714755052561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdba396c-182a-4bef-8ccb-2275534d89c8,},Annotations:map[string]string{io.kubernetes.container.hash: ef424d44,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c34465a005584f8717eff45c810e58337f80bc5f87ede098533bcc716cc6b82a,PodSandboxId:8f9d54f627ac9cf4a6a158bd59974782c391c94abf0cbac4a88992ab90057fb8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1699402714532925842,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6k8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b019bf-527c-4265-a67c-31e6cf377039,},Annotations:map[string]string{io.kubernetes.container.hash: 2cbb9000,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ea18eeebb997e1c420490aaca5e3210cb999e8634e44fc18955bf19502a0ba,PodSandboxId:c8537f902f5b485b7f8dd3a7b90c5a4fda375f2774c608d7fe9fd206b97c01ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1699402713370700547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vl7nr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c6d5125-ebac-4931-9af7-045d1c4ba2b1,},Annotations:map[string]string{io.kubernetes.container.hash: e6be1849,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2b9790aba3f68a303cae1dfd0380a20f5abc6d0ca158a81cc13cf50ee09bb4a,PodSandboxId:bfd143469feb56623caea7b93a30b284d3103b7754676c9795e8aece29b963ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1699402690346299680,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8e9a6ea75c1f836169baf57b947fb963,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d47be6e9b0407212873db2905fdaae6db1089403681c1a53e30f2bc8f15aafb7,PodSandboxId:d42814db488e141657413d1b4ebe453ae8e872571e5ef6efff0f41641b0ae9d6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1699402689852347424,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d317eb0edde6b082ddeb87a0edd3fd,},Annotations:map
[string]string{io.kubernetes.container.hash: 941977ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b1c3ebbbf66c509c1ccf4591f3ebb7e8269c7d2aa74f294406eac958d98bc4b,PodSandboxId:67f3c4ee09cc7d810051e7aed7a9e2d08ce87c234c06f01ae8e86c204fdb2070,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1699402689518686511,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afb0f26b2571b2956b1d2260c
a7e78ae,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d181d8164e69f83813d7d59131e829c75cadfbd00f3e97edae5b82b47acddbe,PodSandboxId:b7eeef6985dd20728e93f2bffb2d5ee0d9bcc5bdf31acdf2b51f2dec48e4228e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1699402689555811674,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e8a3996624e70e2d7824097f608acdb,},A
nnotations:map[string]string{io.kubernetes.container.hash: 6d2b62dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e7dc762e-2608-48a5-abed-f4dbdfac23ac name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:27:48 no-preload-320390 crio[713]: time="2023-11-08 00:27:48.227018305Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d52f3c54-1ccf-42b9-9215-3c15b91b1a9f name=/runtime.v1.RuntimeService/Version
	Nov 08 00:27:48 no-preload-320390 crio[713]: time="2023-11-08 00:27:48.227100449Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d52f3c54-1ccf-42b9-9215-3c15b91b1a9f name=/runtime.v1.RuntimeService/Version
	Nov 08 00:27:48 no-preload-320390 crio[713]: time="2023-11-08 00:27:48.228816168Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=45c0f3f4-3c17-47ac-8a03-880456857916 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:27:48 no-preload-320390 crio[713]: time="2023-11-08 00:27:48.229408971Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699403268229388050,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=45c0f3f4-3c17-47ac-8a03-880456857916 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:27:48 no-preload-320390 crio[713]: time="2023-11-08 00:27:48.230204679Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=66b6010f-db26-47c0-8153-1f719dff214d name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:27:48 no-preload-320390 crio[713]: time="2023-11-08 00:27:48.230517833Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=66b6010f-db26-47c0-8153-1f719dff214d name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:27:48 no-preload-320390 crio[713]: time="2023-11-08 00:27:48.230813922Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:89294275812d549eab8ce0cdac2ded45c29910a232ba43955c5fc671f9456729,PodSandboxId:2a22830dc4b11ebe174d391e51d48e317426101abae8af821ca364240146aa86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1699402714755052561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdba396c-182a-4bef-8ccb-2275534d89c8,},Annotations:map[string]string{io.kubernetes.container.hash: ef424d44,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c34465a005584f8717eff45c810e58337f80bc5f87ede098533bcc716cc6b82a,PodSandboxId:8f9d54f627ac9cf4a6a158bd59974782c391c94abf0cbac4a88992ab90057fb8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1699402714532925842,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6k8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b019bf-527c-4265-a67c-31e6cf377039,},Annotations:map[string]string{io.kubernetes.container.hash: 2cbb9000,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ea18eeebb997e1c420490aaca5e3210cb999e8634e44fc18955bf19502a0ba,PodSandboxId:c8537f902f5b485b7f8dd3a7b90c5a4fda375f2774c608d7fe9fd206b97c01ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1699402713370700547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vl7nr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c6d5125-ebac-4931-9af7-045d1c4ba2b1,},Annotations:map[string]string{io.kubernetes.container.hash: e6be1849,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2b9790aba3f68a303cae1dfd0380a20f5abc6d0ca158a81cc13cf50ee09bb4a,PodSandboxId:bfd143469feb56623caea7b93a30b284d3103b7754676c9795e8aece29b963ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1699402690346299680,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8e9a6ea75c1f836169baf57b947fb963,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d47be6e9b0407212873db2905fdaae6db1089403681c1a53e30f2bc8f15aafb7,PodSandboxId:d42814db488e141657413d1b4ebe453ae8e872571e5ef6efff0f41641b0ae9d6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1699402689852347424,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d317eb0edde6b082ddeb87a0edd3fd,},Annotations:map
[string]string{io.kubernetes.container.hash: 941977ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b1c3ebbbf66c509c1ccf4591f3ebb7e8269c7d2aa74f294406eac958d98bc4b,PodSandboxId:67f3c4ee09cc7d810051e7aed7a9e2d08ce87c234c06f01ae8e86c204fdb2070,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1699402689518686511,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afb0f26b2571b2956b1d2260c
a7e78ae,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d181d8164e69f83813d7d59131e829c75cadfbd00f3e97edae5b82b47acddbe,PodSandboxId:b7eeef6985dd20728e93f2bffb2d5ee0d9bcc5bdf31acdf2b51f2dec48e4228e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1699402689555811674,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e8a3996624e70e2d7824097f608acdb,},A
nnotations:map[string]string{io.kubernetes.container.hash: 6d2b62dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=66b6010f-db26-47c0-8153-1f719dff214d name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:27:48 no-preload-320390 crio[713]: time="2023-11-08 00:27:48.263748352Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=ae5f3607-b1f6-4281-8af9-66bbc832ded8 name=/runtime.v1.RuntimeService/Status
	Nov 08 00:27:48 no-preload-320390 crio[713]: time="2023-11-08 00:27:48.263842738Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=ae5f3607-b1f6-4281-8af9-66bbc832ded8 name=/runtime.v1.RuntimeService/Status
	Nov 08 00:27:48 no-preload-320390 crio[713]: time="2023-11-08 00:27:48.274968402Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d42d129b-f297-4823-80c0-e5d486f3263d name=/runtime.v1.RuntimeService/Version
	Nov 08 00:27:48 no-preload-320390 crio[713]: time="2023-11-08 00:27:48.275080455Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d42d129b-f297-4823-80c0-e5d486f3263d name=/runtime.v1.RuntimeService/Version
	Nov 08 00:27:48 no-preload-320390 crio[713]: time="2023-11-08 00:27:48.276650221Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=871957d7-3589-4226-bf76-582392159c9d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:27:48 no-preload-320390 crio[713]: time="2023-11-08 00:27:48.277266539Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699403268277244801,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=871957d7-3589-4226-bf76-582392159c9d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:27:48 no-preload-320390 crio[713]: time="2023-11-08 00:27:48.278419925Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=623b36dc-5734-42ac-a599-c9796815295f name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:27:48 no-preload-320390 crio[713]: time="2023-11-08 00:27:48.278481949Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=623b36dc-5734-42ac-a599-c9796815295f name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:27:48 no-preload-320390 crio[713]: time="2023-11-08 00:27:48.278707289Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:89294275812d549eab8ce0cdac2ded45c29910a232ba43955c5fc671f9456729,PodSandboxId:2a22830dc4b11ebe174d391e51d48e317426101abae8af821ca364240146aa86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1699402714755052561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdba396c-182a-4bef-8ccb-2275534d89c8,},Annotations:map[string]string{io.kubernetes.container.hash: ef424d44,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c34465a005584f8717eff45c810e58337f80bc5f87ede098533bcc716cc6b82a,PodSandboxId:8f9d54f627ac9cf4a6a158bd59974782c391c94abf0cbac4a88992ab90057fb8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1699402714532925842,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6k8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b019bf-527c-4265-a67c-31e6cf377039,},Annotations:map[string]string{io.kubernetes.container.hash: 2cbb9000,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ea18eeebb997e1c420490aaca5e3210cb999e8634e44fc18955bf19502a0ba,PodSandboxId:c8537f902f5b485b7f8dd3a7b90c5a4fda375f2774c608d7fe9fd206b97c01ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1699402713370700547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vl7nr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c6d5125-ebac-4931-9af7-045d1c4ba2b1,},Annotations:map[string]string{io.kubernetes.container.hash: e6be1849,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2b9790aba3f68a303cae1dfd0380a20f5abc6d0ca158a81cc13cf50ee09bb4a,PodSandboxId:bfd143469feb56623caea7b93a30b284d3103b7754676c9795e8aece29b963ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1699402690346299680,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8e9a6ea75c1f836169baf57b947fb963,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d47be6e9b0407212873db2905fdaae6db1089403681c1a53e30f2bc8f15aafb7,PodSandboxId:d42814db488e141657413d1b4ebe453ae8e872571e5ef6efff0f41641b0ae9d6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1699402689852347424,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d317eb0edde6b082ddeb87a0edd3fd,},Annotations:map
[string]string{io.kubernetes.container.hash: 941977ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b1c3ebbbf66c509c1ccf4591f3ebb7e8269c7d2aa74f294406eac958d98bc4b,PodSandboxId:67f3c4ee09cc7d810051e7aed7a9e2d08ce87c234c06f01ae8e86c204fdb2070,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1699402689518686511,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afb0f26b2571b2956b1d2260c
a7e78ae,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d181d8164e69f83813d7d59131e829c75cadfbd00f3e97edae5b82b47acddbe,PodSandboxId:b7eeef6985dd20728e93f2bffb2d5ee0d9bcc5bdf31acdf2b51f2dec48e4228e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1699402689555811674,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e8a3996624e70e2d7824097f608acdb,},A
nnotations:map[string]string{io.kubernetes.container.hash: 6d2b62dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=623b36dc-5734-42ac-a599-c9796815295f name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	89294275812d5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   2a22830dc4b11       storage-provisioner
	c34465a005584       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   9 minutes ago       Running             kube-proxy                0                   8f9d54f627ac9       kube-proxy-m6k8g
	52ea18eeebb99       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   c8537f902f5b4       coredns-5dd5756b68-vl7nr
	a2b9790aba3f6       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   9 minutes ago       Running             kube-scheduler            2                   bfd143469feb5       kube-scheduler-no-preload-320390
	d47be6e9b0407       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   d42814db488e1       etcd-no-preload-320390
	7d181d8164e69       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   9 minutes ago       Running             kube-apiserver            2                   b7eeef6985dd2       kube-apiserver-no-preload-320390
	3b1c3ebbbf66c       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   9 minutes ago       Running             kube-controller-manager   2                   67f3c4ee09cc7       kube-controller-manager-no-preload-320390
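	
	The attempt counters above tell the restart story: the control-plane containers (etcd, kube-apiserver, kube-scheduler, kube-controller-manager) are on attempt 2 while the workload pods (kube-proxy, coredns, storage-provisioner) are on attempt 0, consistent with a control-plane restart before this snapshot. The same table can be reproduced from inside the VM (a sketch; assumes crictl is on the PATH, as it is in the minikube guest):
	
	  $ sudo crictl ps -a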
	
	* 
	* ==> coredns [52ea18eeebb997e1c420490aaca5e3210cb999e8634e44fc18955bf19502a0ba] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:37451 - 60939 "HINFO IN 6423122248177977238.1283848085502843503. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020976607s
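	
	The HINFO query with the long random name is CoreDNS's loop-detection probe, and the two different SHA512 hashes around "Reloading" show the Corefile was replaced once after startup. The active Corefile can be read back with kubectl (a sketch; assumes the standard kubeadm ConfigMap name):
	
	  $ kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'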
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-320390
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-320390
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e
	                    minikube.k8s.io/name=no-preload-320390
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_08T00_18_17_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Nov 2023 00:18:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-320390
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Nov 2023 00:27:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Nov 2023 00:23:44 +0000   Wed, 08 Nov 2023 00:18:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Nov 2023 00:23:44 +0000   Wed, 08 Nov 2023 00:18:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Nov 2023 00:23:44 +0000   Wed, 08 Nov 2023 00:18:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Nov 2023 00:23:44 +0000   Wed, 08 Nov 2023 00:18:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.176
	  Hostname:    no-preload-320390
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 8c178cf512c54d4fb9fcd7bd180751f4
	  System UUID:                8c178cf5-12c5-4d4f-b9fc-d7bd180751f4
	  Boot ID:                    8f17c187-089a-41df-a272-f9c7d1be0d14
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-vl7nr                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m18s
	  kube-system                 etcd-no-preload-320390                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m32s
	  kube-system                 kube-apiserver-no-preload-320390             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m30s
	  kube-system                 kube-controller-manager-no-preload-320390    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m33s
	  kube-system                 kube-proxy-m6k8g                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-scheduler-no-preload-320390             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m32s
	  kube-system                 metrics-server-57f55c9bc5-n49bz              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m15s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m13s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m40s (x8 over 9m40s)  kubelet          Node no-preload-320390 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m40s (x8 over 9m40s)  kubelet          Node no-preload-320390 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m40s (x7 over 9m40s)  kubelet          Node no-preload-320390 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m31s                  kubelet          Node no-preload-320390 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m31s                  kubelet          Node no-preload-320390 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m31s                  kubelet          Node no-preload-320390 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m31s                  kubelet          Node no-preload-320390 status is now: NodeNotReady
	  Normal  Starting                 9m31s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m30s                  kubelet          Node no-preload-320390 status is now: NodeReady
	  Normal  RegisteredNode           9m19s                  node-controller  Node no-preload-320390 event: Registered Node no-preload-320390 in Controller
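	
	Note the doubled NodeHasSufficient* events: the "(x8 over 9m40s)" burst comes from the first kubelet registration, while the 9m31s set (including NodeNotReady and a fresh "Starting kubelet.") comes from the kubelet restart that also explains the attempt-2 control-plane containers above. This view can be regenerated with (assuming kubectl targets this cluster):
	
	  $ kubectl describe node no-preload-320390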
	
	* 
	* ==> dmesg <==
	* [Nov 8 00:12] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068367] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.325854] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.497505] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.141095] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.439782] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.474729] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.110954] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.147908] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.096650] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.238061] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[Nov 8 00:13] systemd-fstab-generator[1270]: Ignoring "noauto" for root device
	[ +19.445945] kauditd_printk_skb: 29 callbacks suppressed
	[Nov 8 00:18] systemd-fstab-generator[3913]: Ignoring "noauto" for root device
	[  +9.797379] systemd-fstab-generator[4238]: Ignoring "noauto" for root device
	[ +13.468382] kauditd_printk_skb: 2 callbacks suppressed
	[ +13.153644] kauditd_printk_skb: 9 callbacks suppressed
	
	* 
	* ==> etcd [d47be6e9b0407212873db2905fdaae6db1089403681c1a53e30f2bc8f15aafb7] <==
	* {"level":"info","ts":"2023-11-08T00:18:11.439777Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f4f572eb29375a switched to configuration voters=(357180144389535578)"}
	{"level":"info","ts":"2023-11-08T00:18:11.439886Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"310df9cc729b3e75","local-member-id":"4f4f572eb29375a","added-peer-id":"4f4f572eb29375a","added-peer-peer-urls":["https://192.168.61.176:2380"]}
	{"level":"info","ts":"2023-11-08T00:18:11.462381Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-08T00:18:11.46294Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.176:2380"}
	{"level":"info","ts":"2023-11-08T00:18:11.463036Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.176:2380"}
	{"level":"info","ts":"2023-11-08T00:18:11.464402Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-08T00:18:11.464333Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"4f4f572eb29375a","initial-advertise-peer-urls":["https://192.168.61.176:2380"],"listen-peer-urls":["https://192.168.61.176:2380"],"advertise-client-urls":["https://192.168.61.176:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.176:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-08T00:18:11.56537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f4f572eb29375a is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-08T00:18:11.566229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f4f572eb29375a became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-08T00:18:11.566483Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f4f572eb29375a received MsgPreVoteResp from 4f4f572eb29375a at term 1"}
	{"level":"info","ts":"2023-11-08T00:18:11.568226Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f4f572eb29375a became candidate at term 2"}
	{"level":"info","ts":"2023-11-08T00:18:11.56829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f4f572eb29375a received MsgVoteResp from 4f4f572eb29375a at term 2"}
	{"level":"info","ts":"2023-11-08T00:18:11.568334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f4f572eb29375a became leader at term 2"}
	{"level":"info","ts":"2023-11-08T00:18:11.568369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4f4f572eb29375a elected leader 4f4f572eb29375a at term 2"}
	{"level":"info","ts":"2023-11-08T00:18:11.573238Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T00:18:11.577399Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"4f4f572eb29375a","local-member-attributes":"{Name:no-preload-320390 ClientURLs:[https://192.168.61.176:2379]}","request-path":"/0/members/4f4f572eb29375a/attributes","cluster-id":"310df9cc729b3e75","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-08T00:18:11.577468Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T00:18:11.584762Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-08T00:18:11.593203Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"310df9cc729b3e75","local-member-id":"4f4f572eb29375a","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T00:18:11.593382Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T00:18:11.593424Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T00:18:11.59344Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T00:18:11.600564Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.176:2379"}
	{"level":"info","ts":"2023-11-08T00:18:11.603304Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-08T00:18:11.603454Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  00:27:48 up 15 min,  0 users,  load average: 0.16, 0.33, 0.31
	Linux no-preload-320390 5.10.57 #1 SMP Tue Nov 7 06:51:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [7d181d8164e69f83813d7d59131e829c75cadfbd00f3e97edae5b82b47acddbe] <==
	* W1108 00:23:15.365564       1 handler_proxy.go:93] no RequestInfo found in the context
	E1108 00:23:15.365636       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1108 00:23:15.365645       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1108 00:23:15.365704       1 handler_proxy.go:93] no RequestInfo found in the context
	E1108 00:23:15.365840       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1108 00:23:15.367223       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1108 00:24:14.223020       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1108 00:24:15.366643       1 handler_proxy.go:93] no RequestInfo found in the context
	E1108 00:24:15.366928       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1108 00:24:15.366971       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1108 00:24:15.368092       1 handler_proxy.go:93] no RequestInfo found in the context
	E1108 00:24:15.368286       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1108 00:24:15.368319       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1108 00:25:14.222482       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1108 00:26:14.222366       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1108 00:26:15.367845       1 handler_proxy.go:93] no RequestInfo found in the context
	E1108 00:26:15.367933       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1108 00:26:15.367941       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1108 00:26:15.369269       1 handler_proxy.go:93] no RequestInfo found in the context
	E1108 00:26:15.369384       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1108 00:26:15.369423       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1108 00:27:14.222861       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
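	
	The recurring 503s mean the aggregated metrics API (v1beta1.metrics.k8s.io) has no healthy backend; metrics-server never becomes ready (see the kubelet log below for why). A quick confirmation (a sketch; assumes kubectl targets this cluster and the usual metrics-server label):
	
	  $ kubectl get apiservice v1beta1.metrics.k8s.io
	  $ kubectl -n kube-system get pods -l k8s-app=metrics-server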
	
	* 
	* ==> kube-controller-manager [3b1c3ebbbf66c509c1ccf4591f3ebb7e8269c7d2aa74f294406eac958d98bc4b] <==
	* I1108 00:22:00.457483       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:22:30.001700       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:22:30.471772       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:23:00.008983       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:23:00.483690       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:23:30.016835       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:23:30.495549       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:24:00.023868       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:24:00.505830       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1108 00:24:22.866731       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="654.52µs"
	E1108 00:24:30.029762       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:24:30.515432       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1108 00:24:34.864493       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="219.828µs"
	E1108 00:25:00.034956       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:25:00.526289       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:25:30.046897       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:25:30.536332       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:26:00.052429       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:26:00.545367       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:26:30.058441       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:26:30.554075       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:27:00.064431       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:27:00.564929       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:27:30.071538       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:27:30.575946       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [c34465a005584f8717eff45c810e58337f80bc5f87ede098533bcc716cc6b82a] <==
	* I1108 00:18:34.883939       1 server_others.go:69] "Using iptables proxy"
	I1108 00:18:34.926037       1 node.go:141] Successfully retrieved node IP: 192.168.61.176
	I1108 00:18:35.028087       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1108 00:18:35.028239       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1108 00:18:35.082455       1 server_others.go:152] "Using iptables Proxier"
	I1108 00:18:35.082582       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1108 00:18:35.083540       1 server.go:846] "Version info" version="v1.28.3"
	I1108 00:18:35.083558       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 00:18:35.091711       1 config.go:188] "Starting service config controller"
	I1108 00:18:35.091975       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1108 00:18:35.093034       1 config.go:315] "Starting node config controller"
	I1108 00:18:35.096778       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1108 00:18:35.095676       1 config.go:97] "Starting endpoint slice config controller"
	I1108 00:18:35.096863       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1108 00:18:35.196909       1 shared_informer.go:318] Caches are synced for node config
	I1108 00:18:35.197005       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1108 00:18:35.197004       1 shared_informer.go:318] Caches are synced for service config
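	
	kube-proxy settled into IPv4-only iptables mode after probing ("No iptables support for family" IPv6 on this kernel). Its service chains can be spot-checked from the node (a sketch; run inside the minikube VM):
	
	  $ sudo iptables -t nat -L KUBE-SERVICES -n | head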
	
	* 
	* ==> kube-scheduler [a2b9790aba3f68a303cae1dfd0380a20f5abc6d0ca158a81cc13cf50ee09bb4a] <==
	* W1108 00:18:14.400492       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1108 00:18:14.400557       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1108 00:18:15.226880       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1108 00:18:15.227001       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1108 00:18:15.268087       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1108 00:18:15.268290       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1108 00:18:15.306464       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1108 00:18:15.306552       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1108 00:18:15.346269       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1108 00:18:15.346346       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1108 00:18:15.483476       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1108 00:18:15.483541       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 00:18:15.515945       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1108 00:18:15.516023       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1108 00:18:15.560036       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1108 00:18:15.560088       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1108 00:18:15.580428       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1108 00:18:15.580547       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1108 00:18:15.603384       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1108 00:18:15.603463       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1108 00:18:15.618618       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1108 00:18:15.618708       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1108 00:18:15.645761       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1108 00:18:15.645831       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1108 00:18:18.177253       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-11-08 00:12:52 UTC, ends at Wed 2023-11-08 00:27:48 UTC. --
	Nov 08 00:25:10 no-preload-320390 kubelet[4245]: E1108 00:25:10.846876    4245 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n49bz" podUID="26c5310d-c29f-476a-a520-bd693143e248"
	Nov 08 00:25:17 no-preload-320390 kubelet[4245]: E1108 00:25:17.970838    4245 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 08 00:25:17 no-preload-320390 kubelet[4245]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 08 00:25:17 no-preload-320390 kubelet[4245]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 08 00:25:17 no-preload-320390 kubelet[4245]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 08 00:25:23 no-preload-320390 kubelet[4245]: E1108 00:25:23.846577    4245 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n49bz" podUID="26c5310d-c29f-476a-a520-bd693143e248"
	Nov 08 00:25:38 no-preload-320390 kubelet[4245]: E1108 00:25:38.846519    4245 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n49bz" podUID="26c5310d-c29f-476a-a520-bd693143e248"
	Nov 08 00:25:53 no-preload-320390 kubelet[4245]: E1108 00:25:53.847229    4245 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n49bz" podUID="26c5310d-c29f-476a-a520-bd693143e248"
	Nov 08 00:26:04 no-preload-320390 kubelet[4245]: E1108 00:26:04.846913    4245 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n49bz" podUID="26c5310d-c29f-476a-a520-bd693143e248"
	Nov 08 00:26:17 no-preload-320390 kubelet[4245]: E1108 00:26:17.971255    4245 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 08 00:26:17 no-preload-320390 kubelet[4245]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 08 00:26:17 no-preload-320390 kubelet[4245]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 08 00:26:17 no-preload-320390 kubelet[4245]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 08 00:26:18 no-preload-320390 kubelet[4245]: E1108 00:26:18.847000    4245 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n49bz" podUID="26c5310d-c29f-476a-a520-bd693143e248"
	Nov 08 00:26:31 no-preload-320390 kubelet[4245]: E1108 00:26:31.848311    4245 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n49bz" podUID="26c5310d-c29f-476a-a520-bd693143e248"
	Nov 08 00:26:42 no-preload-320390 kubelet[4245]: E1108 00:26:42.846243    4245 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n49bz" podUID="26c5310d-c29f-476a-a520-bd693143e248"
	Nov 08 00:26:56 no-preload-320390 kubelet[4245]: E1108 00:26:56.846765    4245 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n49bz" podUID="26c5310d-c29f-476a-a520-bd693143e248"
	Nov 08 00:27:08 no-preload-320390 kubelet[4245]: E1108 00:27:08.847105    4245 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n49bz" podUID="26c5310d-c29f-476a-a520-bd693143e248"
	Nov 08 00:27:17 no-preload-320390 kubelet[4245]: E1108 00:27:17.974635    4245 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 08 00:27:17 no-preload-320390 kubelet[4245]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 08 00:27:17 no-preload-320390 kubelet[4245]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 08 00:27:17 no-preload-320390 kubelet[4245]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 08 00:27:19 no-preload-320390 kubelet[4245]: E1108 00:27:19.847862    4245 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n49bz" podUID="26c5310d-c29f-476a-a520-bd693143e248"
	Nov 08 00:27:30 no-preload-320390 kubelet[4245]: E1108 00:27:30.847009    4245 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n49bz" podUID="26c5310d-c29f-476a-a520-bd693143e248"
	Nov 08 00:27:43 no-preload-320390 kubelet[4245]: E1108 00:27:43.846757    4245 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n49bz" podUID="26c5310d-c29f-476a-a520-bd693143e248"
	
	* 
	* ==> storage-provisioner [89294275812d549eab8ce0cdac2ded45c29910a232ba43955c5fc671f9456729] <==
	* I1108 00:18:34.925963       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 00:18:34.942945       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 00:18:34.943205       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1108 00:18:34.960405       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 00:18:34.960844       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-320390_84f16619-62b7-4b7d-8ec7-b67f9c365c96!
	I1108 00:18:34.964859       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"46c1dbd2-a970-4526-bfb8-47404fe8eb3a", APIVersion:"v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-320390_84f16619-62b7-4b7d-8ec7-b67f9c365c96 became leader
	I1108 00:18:35.062941       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-320390_84f16619-62b7-4b7d-8ec7-b67f9c365c96!
	

-- /stdout --
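Two errors recur throughout the kubelet journal captured above, and neither is the direct cause of this failure. The metrics-server ImagePullBackOff is expected: the harness points the addon at the unreachable registry fake.domain (see the "addons enable metrics-server" rows with --registries=MetricsServer=fake.domain in the Audit table of the next post-mortem below). The ip6tables canary error means the guest kernel has no ip6tables nat table available. A minimal way to check from the host, assuming the profile is still running and that ip6table_nat is the relevant module name (an assumption; the log does not name the module):

    # Does the guest kernel have the ip6 NAT table module loaded? (empty output = no)
    out/minikube-linux-amd64 ssh -p no-preload-320390 -- lsmod | grep ip6table_nat
    # Try loading it; this only succeeds if the module was built for the guest kernel
    out/minikube-linux-amd64 ssh -p no-preload-320390 -- sudo modprobe ip6table_nat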
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-320390 -n no-preload-320390
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-320390 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-n49bz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-320390 describe pod metrics-server-57f55c9bc5-n49bz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-320390 describe pod metrics-server-57f55c9bc5-n49bz: exit status 1 (65.443141ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-n49bz" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-320390 describe pod metrics-server-57f55c9bc5-n49bz: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.91s)
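This failure is a timeout rather than a crash: after the stop/start cycle the test polls for the dashboard pods to become ready and gives up when the budget runs out. The sibling default-k8s-diff-port run below waits on the selector k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace with a 9m0s deadline; assuming the no-preload run uses the same selector and deadline, a manual equivalent of that wait would be (a sketch, not part of the test suite):

    # List the dashboard pods the test is waiting for
    kubectl --context no-preload-320390 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    # Block until they are Ready, with the same 9-minute budget
    kubectl --context no-preload-320390 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m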

x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1108 00:20:38.956993   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
E1108 00:20:42.434619   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-039263 -n default-k8s-diff-port-039263
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-11-08 00:28:06.119788095 +0000 UTC m=+5220.283097001
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-039263 -n default-k8s-diff-port-039263
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-039263 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-039263 logs -n 25: (1.584512777s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p kubernetes-upgrade-161055                           | kubernetes-upgrade-161055    | jenkins | v1.32.0 | 08 Nov 23 00:04 UTC | 08 Nov 23 00:04 UTC |
	| start   | -p no-preload-320390                                   | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:04 UTC | 08 Nov 23 00:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-484343                              | cert-expiration-484343       | jenkins | v1.32.0 | 08 Nov 23 00:04 UTC | 08 Nov 23 00:05 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-590541        | old-k8s-version-590541       | jenkins | v1.32.0 | 08 Nov 23 00:05 UTC | 08 Nov 23 00:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-590541                              | old-k8s-version-590541       | jenkins | v1.32.0 | 08 Nov 23 00:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-484343                              | cert-expiration-484343       | jenkins | v1.32.0 | 08 Nov 23 00:05 UTC | 08 Nov 23 00:05 UTC |
	| start   | -p embed-certs-253253                                  | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:05 UTC | 08 Nov 23 00:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-320390             | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:06 UTC | 08 Nov 23 00:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-320390                                   | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-253253            | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:06 UTC | 08 Nov 23 00:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-253253                                  | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-688874                              | stopped-upgrade-688874       | jenkins | v1.32.0 | 08 Nov 23 00:06 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-688874                              | stopped-upgrade-688874       | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC | 08 Nov 23 00:07 UTC |
	| delete  | -p                                                     | disable-driver-mounts-560216 | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC | 08 Nov 23 00:07 UTC |
	|         | disable-driver-mounts-560216                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC | 08 Nov 23 00:09 UTC |
	|         | default-k8s-diff-port-039263                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-590541             | old-k8s-version-590541       | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-590541                              | old-k8s-version-590541       | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC | 08 Nov 23 00:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-320390                  | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-253253                 | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-320390                                   | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC | 08 Nov 23 00:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-253253                                  | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC | 08 Nov 23 00:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-039263  | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC | 08 Nov 23 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC |                     |
	|         | default-k8s-diff-port-039263                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-039263       | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:12 UTC | 08 Nov 23 00:19 UTC |
	|         | default-k8s-diff-port-039263                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
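For reference, the metrics-server misconfiguration that shows up as ImagePullBackOff throughout these logs is issued by the test harness itself and is visible in the audit rows above. Reconstructed as a single invocation (arguments copied from the table):

    out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-039263 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain

This rewrites the addon's image reference to fake.domain/registry.k8s.io/echoserver:1.4, which can never be pulled, so the metrics-server pod is expected to sit in ImagePullBackOff for the life of the profile.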
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/08 00:12:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 00:12:00.921478   51228 out.go:296] Setting OutFile to fd 1 ...
	I1108 00:12:00.921584   51228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:12:00.921592   51228 out.go:309] Setting ErrFile to fd 2...
	I1108 00:12:00.921597   51228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:12:00.921752   51228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
	I1108 00:12:00.922282   51228 out.go:303] Setting JSON to false
	I1108 00:12:00.923151   51228 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6870,"bootTime":1699395451,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 00:12:00.923210   51228 start.go:138] virtualization: kvm guest
	I1108 00:12:00.925322   51228 out.go:177] * [default-k8s-diff-port-039263] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1108 00:12:00.926718   51228 out.go:177]   - MINIKUBE_LOCATION=17585
	I1108 00:12:00.928030   51228 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 00:12:00.926756   51228 notify.go:220] Checking for updates...
	I1108 00:12:00.930659   51228 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:12:00.932049   51228 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	I1108 00:12:00.933341   51228 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 00:12:00.934394   51228 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 00:12:00.936334   51228 config.go:182] Loaded profile config "default-k8s-diff-port-039263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:12:00.936806   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:12:00.936857   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:12:00.950893   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36119
	I1108 00:12:00.951284   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:12:00.951775   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:12:00.951796   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:12:00.952131   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:12:00.952308   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:12:00.952537   51228 driver.go:378] Setting default libvirt URI to qemu:///system
	I1108 00:12:00.952850   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:12:00.952894   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:12:00.966402   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44715
	I1108 00:12:00.966726   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:12:00.967218   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:12:00.967238   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:12:00.967525   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:12:00.967705   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:12:01.002079   51228 out.go:177] * Using the kvm2 driver based on existing profile
	I1108 00:12:01.003352   51228 start.go:298] selected driver: kvm2
	I1108 00:12:01.003362   51228 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-039263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:def
ault-k8s-diff-port-039263 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.116 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:12:01.003471   51228 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 00:12:01.004117   51228 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:12:01.004197   51228 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17585-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1108 00:12:01.018635   51228 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1108 00:12:01.018987   51228 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 00:12:01.019047   51228 cni.go:84] Creating CNI manager for ""
	I1108 00:12:01.019060   51228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:12:01.019072   51228 start_flags.go:323] config:
	{Name:default-k8s-diff-port-039263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-039263 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.116 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:12:01.019251   51228 iso.go:125] acquiring lock: {Name:mk02d02b2a7a45dbdd1b46a32fb0724673cb4d8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:12:01.021306   51228 out.go:177] * Starting control plane node default-k8s-diff-port-039263 in cluster default-k8s-diff-port-039263
	I1108 00:12:00.865093   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:03.937104   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:01.022723   51228 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1108 00:12:01.022765   51228 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1108 00:12:01.022777   51228 cache.go:56] Caching tarball of preloaded images
	I1108 00:12:01.022864   51228 preload.go:174] Found /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 00:12:01.022875   51228 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1108 00:12:01.022984   51228 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/config.json ...
	I1108 00:12:01.023164   51228 start.go:365] acquiring machines lock for default-k8s-diff-port-039263: {Name:mkf032f30be570950285b6e092e75fb29cc3d166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1108 00:12:10.017091   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:13.089091   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:19.169065   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:22.241084   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:28.321050   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:31.393060   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:37.473056   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:40.475708   50505 start.go:369] acquired machines lock for "no-preload-320390" in 3m26.103068871s
	I1108 00:12:40.475773   50505 start.go:96] Skipping create...Using existing machine configuration
	I1108 00:12:40.475781   50505 fix.go:54] fixHost starting: 
	I1108 00:12:40.476087   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:12:40.476116   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:12:40.490309   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45419
	I1108 00:12:40.490708   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:12:40.491196   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:12:40.491217   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:12:40.491530   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:12:40.491718   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:40.491870   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetState
	I1108 00:12:40.493597   50505 fix.go:102] recreateIfNeeded on no-preload-320390: state=Stopped err=<nil>
	I1108 00:12:40.493628   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	W1108 00:12:40.493762   50505 fix.go:128] unexpected machine state, will restart: <nil>
	I1108 00:12:40.495670   50505 out.go:177] * Restarting existing kvm2 VM for "no-preload-320390" ...
	I1108 00:12:40.496930   50505 main.go:141] libmachine: (no-preload-320390) Calling .Start
	I1108 00:12:40.497098   50505 main.go:141] libmachine: (no-preload-320390) Ensuring networks are active...
	I1108 00:12:40.497753   50505 main.go:141] libmachine: (no-preload-320390) Ensuring network default is active
	I1108 00:12:40.498094   50505 main.go:141] libmachine: (no-preload-320390) Ensuring network mk-no-preload-320390 is active
	I1108 00:12:40.498442   50505 main.go:141] libmachine: (no-preload-320390) Getting domain xml...
	I1108 00:12:40.499199   50505 main.go:141] libmachine: (no-preload-320390) Creating domain...
	I1108 00:12:41.718179   50505 main.go:141] libmachine: (no-preload-320390) Waiting to get IP...
	I1108 00:12:41.719024   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:41.719423   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:41.719497   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:41.719407   51373 retry.go:31] will retry after 204.819851ms: waiting for machine to come up
	I1108 00:12:41.925924   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:41.926414   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:41.926445   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:41.926361   51373 retry.go:31] will retry after 237.59613ms: waiting for machine to come up
	I1108 00:12:42.165848   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:42.166251   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:42.166282   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:42.166195   51373 retry.go:31] will retry after 306.914093ms: waiting for machine to come up
	I1108 00:12:42.474651   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:42.475026   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:42.475057   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:42.474981   51373 retry.go:31] will retry after 490.427385ms: waiting for machine to come up
	I1108 00:12:42.967292   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:42.967709   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:42.967733   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:42.967661   51373 retry.go:31] will retry after 684.227655ms: waiting for machine to come up
	I1108 00:12:43.653384   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:43.653823   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:43.653847   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:43.653774   51373 retry.go:31] will retry after 640.101868ms: waiting for machine to come up
	I1108 00:12:40.473798   50022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 00:12:40.473838   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:12:40.475605   50022 machine.go:91] provisioned docker machine in 4m37.566672036s
	I1108 00:12:40.475639   50022 fix.go:56] fixHost completed within 4m37.589859084s
	I1108 00:12:40.475644   50022 start.go:83] releasing machines lock for "old-k8s-version-590541", held for 4m37.589890946s
	W1108 00:12:40.475670   50022 start.go:691] error starting host: provision: host is not running
	W1108 00:12:40.475777   50022 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1108 00:12:40.475788   50022 start.go:706] Will try again in 5 seconds ...
	I1108 00:12:44.295060   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:44.295559   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:44.295610   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:44.295506   51373 retry.go:31] will retry after 797.709386ms: waiting for machine to come up
	I1108 00:12:45.095135   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:45.095552   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:45.095575   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:45.095476   51373 retry.go:31] will retry after 1.052157242s: waiting for machine to come up
	I1108 00:12:46.149040   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:46.149393   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:46.149426   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:46.149336   51373 retry.go:31] will retry after 1.246701556s: waiting for machine to come up
	I1108 00:12:47.397579   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:47.397942   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:47.397981   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:47.397900   51373 retry.go:31] will retry after 1.742754262s: waiting for machine to come up
	I1108 00:12:49.142995   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:49.143390   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:49.143419   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:49.143349   51373 retry.go:31] will retry after 2.412997156s: waiting for machine to come up
	I1108 00:12:45.476072   50022 start.go:365] acquiring machines lock for old-k8s-version-590541: {Name:mkf032f30be570950285b6e092e75fb29cc3d166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1108 00:12:51.558471   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:51.558857   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:51.558880   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:51.558809   51373 retry.go:31] will retry after 3.169873944s: waiting for machine to come up
	I1108 00:12:54.732010   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:54.732320   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:54.732340   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:54.732292   51373 retry.go:31] will retry after 3.452837487s: waiting for machine to come up
	I1108 00:12:58.188516   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.188983   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has current primary IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.189014   50505 main.go:141] libmachine: (no-preload-320390) Found IP for machine: 192.168.61.176
	I1108 00:12:58.189036   50505 main.go:141] libmachine: (no-preload-320390) Reserving static IP address...
	I1108 00:12:58.189332   50505 main.go:141] libmachine: (no-preload-320390) Reserved static IP address: 192.168.61.176
	I1108 00:12:58.189364   50505 main.go:141] libmachine: (no-preload-320390) Waiting for SSH to be available...
	I1108 00:12:58.189388   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "no-preload-320390", mac: "52:54:00:0f:d8:91", ip: "192.168.61.176"} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.189415   50505 main.go:141] libmachine: (no-preload-320390) DBG | skip adding static IP to network mk-no-preload-320390 - found existing host DHCP lease matching {name: "no-preload-320390", mac: "52:54:00:0f:d8:91", ip: "192.168.61.176"}
	I1108 00:12:58.189432   50505 main.go:141] libmachine: (no-preload-320390) DBG | Getting to WaitForSSH function...
	I1108 00:12:58.191264   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.191565   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.191598   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.191730   50505 main.go:141] libmachine: (no-preload-320390) DBG | Using SSH client type: external
	I1108 00:12:58.191760   50505 main.go:141] libmachine: (no-preload-320390) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa (-rw-------)
	I1108 00:12:58.191794   50505 main.go:141] libmachine: (no-preload-320390) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.176 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1108 00:12:58.191808   50505 main.go:141] libmachine: (no-preload-320390) DBG | About to run SSH command:
	I1108 00:12:58.191819   50505 main.go:141] libmachine: (no-preload-320390) DBG | exit 0
	I1108 00:12:58.284621   50505 main.go:141] libmachine: (no-preload-320390) DBG | SSH cmd err, output: <nil>: 
	I1108 00:12:58.284983   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetConfigRaw
	I1108 00:12:58.285600   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetIP
	I1108 00:12:58.287966   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.288289   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.288325   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.288532   50505 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/config.json ...
	I1108 00:12:58.288712   50505 machine.go:88] provisioning docker machine ...
	I1108 00:12:58.288732   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:58.288917   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetMachineName
	I1108 00:12:58.289074   50505 buildroot.go:166] provisioning hostname "no-preload-320390"
	I1108 00:12:58.289097   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetMachineName
	I1108 00:12:58.289217   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:58.291053   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.291329   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.291358   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.291460   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:58.291613   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.291749   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.291849   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:58.292009   50505 main.go:141] libmachine: Using SSH client type: native
	I1108 00:12:58.292394   50505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1108 00:12:58.292419   50505 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-320390 && echo "no-preload-320390" | sudo tee /etc/hostname
	I1108 00:12:58.433310   50505 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-320390
	
	I1108 00:12:58.433333   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:58.435959   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.436351   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.436383   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.436531   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:58.436710   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.436853   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.436959   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:58.437088   50505 main.go:141] libmachine: Using SSH client type: native
	I1108 00:12:58.437607   50505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1108 00:12:58.437633   50505 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-320390' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-320390/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-320390' | sudo tee -a /etc/hosts; 
				fi
			fi
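The inlined script above is the provisioner's hostname fix-up: if no line in /etc/hosts already ends with the machine name, it either rewrites an existing 127.0.1.1 entry in place or appends a new one, so the hostname resolves locally inside the VM. A quick way to confirm the result on the running profile (a hypothetical check, not something the test performs):

    out/minikube-linux-amd64 ssh -p no-preload-320390 -- grep 127.0.1.1 /etc/hosts
    # expected: 127.0.1.1 no-preload-320390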
	I1108 00:12:58.578473   50505 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 00:12:58.578506   50505 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1108 00:12:58.578568   50505 buildroot.go:174] setting up certificates
	I1108 00:12:58.578582   50505 provision.go:83] configureAuth start
	I1108 00:12:58.578600   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetMachineName
	I1108 00:12:58.578889   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetIP
	I1108 00:12:58.581534   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.581857   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.581881   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.581948   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:58.583777   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.584002   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.584023   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.584121   50505 provision.go:138] copyHostCerts
	I1108 00:12:58.584172   50505 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1108 00:12:58.584184   50505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1108 00:12:58.584247   50505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1108 00:12:58.584327   50505 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1108 00:12:58.584337   50505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1108 00:12:58.584359   50505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1108 00:12:58.584407   50505 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1108 00:12:58.584415   50505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1108 00:12:58.584434   50505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1108 00:12:58.584473   50505 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.no-preload-320390 san=[192.168.61.176 192.168.61.176 localhost 127.0.0.1 minikube no-preload-320390]
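
The san=[...] list above shows the server certificate being signed against the cluster CA with the VM IP, loopback, and the usual local names as subject alternative names. A minimal crypto/x509 sketch of the same shape, with a self-signed stand-in CA and error handling elided; the SAN values are copied from the log:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Stand-in for ca.pem / ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert with the SANs from the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-320390"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "no-preload-320390"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.61.176"), net.ParseIP("127.0.0.1")},
		}
		der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
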
	I1108 00:12:58.785035   50505 provision.go:172] copyRemoteCerts
	I1108 00:12:58.785095   50505 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 00:12:58.785127   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:58.787683   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.788001   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.788037   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.788194   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:58.788363   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.788534   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:58.788678   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:12:58.881791   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 00:12:58.905314   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1108 00:12:58.928183   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 00:12:58.951053   50505 provision.go:86] duration metric: configureAuth took 372.456375ms
	I1108 00:12:58.951079   50505 buildroot.go:189] setting minikube options for container-runtime
	I1108 00:12:58.951288   50505 config.go:182] Loaded profile config "no-preload-320390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:12:58.951368   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:58.953851   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.954158   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.954182   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.954309   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:58.954504   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.954689   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.954819   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:58.954964   50505 main.go:141] libmachine: Using SSH client type: native
	I1108 00:12:58.955269   50505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1108 00:12:58.955283   50505 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 00:12:59.265311   50505 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 00:12:59.265342   50505 machine.go:91] provisioned docker machine in 976.618103ms
	I1108 00:12:59.265353   50505 start.go:300] post-start starting for "no-preload-320390" (driver="kvm2")
	I1108 00:12:59.265362   50505 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 00:12:59.265377   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:59.265683   50505 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 00:12:59.265721   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:59.533994   50613 start.go:369] acquired machines lock for "embed-certs-253253" in 3m37.489465904s
	I1108 00:12:59.534047   50613 start.go:96] Skipping create...Using existing machine configuration
	I1108 00:12:59.534093   50613 fix.go:54] fixHost starting: 
	I1108 00:12:59.534485   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:12:59.534531   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:12:59.553784   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34533
	I1108 00:12:59.554193   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:12:59.554676   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:12:59.554702   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:12:59.555006   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:12:59.555188   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:12:59.555320   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetState
	I1108 00:12:59.556783   50613 fix.go:102] recreateIfNeeded on embed-certs-253253: state=Stopped err=<nil>
	I1108 00:12:59.556804   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	W1108 00:12:59.556989   50613 fix.go:128] unexpected machine state, will restart: <nil>
	I1108 00:12:59.558834   50613 out.go:177] * Restarting existing kvm2 VM for "embed-certs-253253" ...
	I1108 00:12:59.268378   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.268792   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:59.268836   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.268991   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:59.269175   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:59.269337   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:59.269480   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:12:59.363687   50505 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 00:12:59.368009   50505 info.go:137] Remote host: Buildroot 2021.02.12
	I1108 00:12:59.368028   50505 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1108 00:12:59.368087   50505 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1108 00:12:59.368176   50505 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1108 00:12:59.368287   50505 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 00:12:59.377685   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
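
The filesync scan above mirrors everything under .minikube/files into the VM at the same path with the prefix stripped, which is how files/etc/ssl/certs/168482.pem lands in /etc/ssl/certs. A minimal sketch of that mapping (walk and print only; the real code then scp's each asset):

	package main

	import (
		"fmt"
		"io/fs"
		"path/filepath"
	)

	func main() {
		root := "/home/jenkins/minikube-integration/17585-9647/.minikube/files"
		filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			// Strip the files/ prefix to get the destination path in the VM.
			rel, _ := filepath.Rel(root, path)
			fmt.Printf("local asset: %s -> /%s\n", path, rel)
			return nil
		})
	}
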
	I1108 00:12:59.399143   50505 start.go:303] post-start completed in 133.780055ms
	I1108 00:12:59.399161   50505 fix.go:56] fixHost completed within 18.923380073s
	I1108 00:12:59.399178   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:59.401608   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.401977   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:59.402007   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.402127   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:59.402315   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:59.402471   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:59.402650   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:59.402824   50505 main.go:141] libmachine: Using SSH client type: native
	I1108 00:12:59.403150   50505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1108 00:12:59.403162   50505 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1108 00:12:59.533831   50505 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699402379.481958632
	
	I1108 00:12:59.533852   50505 fix.go:206] guest clock: 1699402379.481958632
	I1108 00:12:59.533859   50505 fix.go:219] Guest: 2023-11-08 00:12:59.481958632 +0000 UTC Remote: 2023-11-08 00:12:59.399164235 +0000 UTC m=+225.183083525 (delta=82.794397ms)
	I1108 00:12:59.533876   50505 fix.go:190] guest clock delta is within tolerance: 82.794397ms
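
The fix.go lines above parse the VM's `date +%s.%N` output, diff it against the host clock, and only resynchronize when the skew exceeds a tolerance. A minimal sketch of that check, using the two timestamps from the log; the 2s threshold is an assumed stand-in, not minikube's actual constant:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		guest := time.Unix(1699402379, 481958632)                      // parsed from `date +%s.%N`
		host := time.Date(2023, 11, 8, 0, 12, 59, 399164235, time.UTC) // host-side reading
		delta := host.Sub(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed threshold
		// Prints "delta 82.794397ms within tolerance: true", matching the log.
		fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
	}
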
	I1108 00:12:59.533880   50505 start.go:83] releasing machines lock for "no-preload-320390", held for 19.058127295s
	I1108 00:12:59.533902   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:59.534171   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetIP
	I1108 00:12:59.537173   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.537616   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:59.537665   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.537736   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:59.538230   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:59.538431   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:59.538517   50505 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 00:12:59.538613   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:59.538659   50505 ssh_runner.go:195] Run: cat /version.json
	I1108 00:12:59.538683   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:59.541051   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.541283   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.541438   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:59.541463   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.541599   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:59.541608   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:59.541634   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.541775   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:59.541845   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:59.541939   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:59.541997   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:59.542078   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:12:59.542093   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:59.542265   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:12:59.637947   50505 ssh_runner.go:195] Run: systemctl --version
	I1108 00:12:59.660255   50505 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 00:12:59.809407   50505 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 00:12:59.816246   50505 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 00:12:59.816323   50505 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:12:59.831564   50505 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 00:12:59.831586   50505 start.go:472] detecting cgroup driver to use...
	I1108 00:12:59.831651   50505 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 00:12:59.847556   50505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 00:12:59.861077   50505 docker.go:203] disabling cri-docker service (if available) ...
	I1108 00:12:59.861143   50505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 00:12:59.876764   50505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 00:12:59.890894   50505 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 00:13:00.001947   50505 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 00:13:00.121923   50505 docker.go:219] disabling docker service ...
	I1108 00:13:00.122000   50505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 00:13:00.135525   50505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 00:13:00.148130   50505 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 00:13:00.259318   50505 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 00:13:00.368101   50505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 00:13:00.381138   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 00:13:00.398173   50505 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1108 00:13:00.398245   50505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:00.407655   50505 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 00:13:00.407699   50505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:00.416919   50505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:00.425767   50505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
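
The four sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup = "pod" after it. A minimal Go sketch of the same rewrite over a fabricated three-line config:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `pause_image = "registry.k8s.io/pause:3.6"
	cgroup_manager = "systemd"
	conmon_cgroup = "system.slice"
	`
		sub := func(pat, rep string) {
			conf = regexp.MustCompile(pat).ReplaceAllString(conf, rep)
		}
		// Same order as the sed commands logged above.
		sub(`(?m)^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.9"`)
		sub(`(?m)^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`)
		sub(`(?m)^conmon_cgroup = .*\n`, "")
		sub(`(?m)^(cgroup_manager = .*)$`, "$1\nconmon_cgroup = \"pod\"")
		fmt.Print(conf)
	}
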
	I1108 00:13:00.434447   50505 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 00:13:00.443679   50505 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 00:13:00.451581   50505 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1108 00:13:00.451619   50505 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1108 00:13:00.464498   50505 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
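
The three commands above are a probe-then-fallback: the sysctl exits with status 255 because /proc/sys/net/bridge only appears once br_netfilter is loaded, so the module is loaded and IPv4 forwarding enabled. A minimal sketch of the same sequence (Linux, run as root):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Probe: fails until br_netfilter is loaded, exactly as logged above.
		if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
				fmt.Fprintln(os.Stderr, "modprobe br_netfilter:", err)
				os.Exit(1)
			}
		}
		// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
			fmt.Fprintln(os.Stderr, "enable ip_forward:", err)
			os.Exit(1)
		}
	}
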
	I1108 00:13:00.474332   50505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 00:13:00.599521   50505 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 00:13:00.770248   50505 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 00:13:00.770341   50505 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 00:13:00.775707   50505 start.go:540] Will wait 60s for crictl version
	I1108 00:13:00.775768   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:00.779578   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 00:13:00.821230   50505 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1108 00:13:00.821320   50505 ssh_runner.go:195] Run: crio --version
	I1108 00:13:00.872851   50505 ssh_runner.go:195] Run: crio --version
	I1108 00:13:00.920420   50505 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1108 00:12:59.560111   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Start
	I1108 00:12:59.560287   50613 main.go:141] libmachine: (embed-certs-253253) Ensuring networks are active...
	I1108 00:12:59.561030   50613 main.go:141] libmachine: (embed-certs-253253) Ensuring network default is active
	I1108 00:12:59.561390   50613 main.go:141] libmachine: (embed-certs-253253) Ensuring network mk-embed-certs-253253 is active
	I1108 00:12:59.561717   50613 main.go:141] libmachine: (embed-certs-253253) Getting domain xml...
	I1108 00:12:59.562287   50613 main.go:141] libmachine: (embed-certs-253253) Creating domain...
	I1108 00:13:00.806061   50613 main.go:141] libmachine: (embed-certs-253253) Waiting to get IP...
	I1108 00:13:00.806862   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:00.807268   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:00.807340   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:00.807226   51493 retry.go:31] will retry after 261.179966ms: waiting for machine to come up
	I1108 00:13:01.069535   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:01.070048   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:01.070078   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:01.069997   51493 retry.go:31] will retry after 302.795302ms: waiting for machine to come up
	I1108 00:13:01.374567   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:01.375094   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:01.375119   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:01.375043   51493 retry.go:31] will retry after 303.804523ms: waiting for machine to come up
	I1108 00:13:01.680374   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:01.680698   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:01.680726   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:01.680660   51493 retry.go:31] will retry after 446.122126ms: waiting for machine to come up
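
The "will retry after ..." lines above come from the driver polling libvirt's DHCP leases with a growing, jittered delay until the restarted VM gets an address. A minimal sketch of such a loop; the backoff shape is an assumption, not minikube's exact retry.go implementation, and lookupIP is a stub:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for the driver's scan of libvirt DHCP leases.
	func lookupIP() (string, error) { return "", errors.New("no lease yet") }

	func main() {
		start, delay := time.Now(), 250*time.Millisecond
		for time.Since(start) < 2*time.Second {
			if ip, err := lookupIP(); err == nil {
				fmt.Println("Found IP for machine:", ip)
				return
			}
			// Jittered, roughly doubling delay, like the intervals logged above.
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			if delay < 4*time.Second {
				delay *= 2
			}
		}
		fmt.Println("timed out waiting for an IP")
	}
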
	I1108 00:13:00.921979   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetIP
	I1108 00:13:00.924760   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:13:00.925121   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:13:00.925148   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:13:00.925370   50505 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1108 00:13:00.929750   50505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:13:00.941338   50505 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1108 00:13:00.941372   50505 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:13:00.979343   50505 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1108 00:13:00.979370   50505 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.3 registry.k8s.io/kube-controller-manager:v1.28.3 registry.k8s.io/kube-scheduler:v1.28.3 registry.k8s.io/kube-proxy:v1.28.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
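
The preload check above runs `sudo crictl images --output json` and looks for the expected kube-apiserver tag in the result; since this is a no-preload profile, it is absent and LoadImages begins. A minimal sketch of that decision, with a fabricated sample payload (field names follow the CRI ListImages response that crictl emits):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Only the field this check needs is modelled.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.9"]}]}`)
		var out imageList
		if err := json.Unmarshal(raw, &out); err != nil {
			panic(err)
		}
		found := false
		for _, img := range out.Images {
			for _, tag := range img.RepoTags {
				if tag == "registry.k8s.io/kube-apiserver:v1.28.3" {
					found = true
				}
			}
		}
		fmt.Println("preloaded:", found) // false -> "assuming images are not preloaded"
	}
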
	I1108 00:13:00.979489   50505 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1108 00:13:00.979539   50505 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1108 00:13:00.979465   50505 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:13:00.979636   50505 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.3
	I1108 00:13:00.979477   50505 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1108 00:13:00.979465   50505 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.3
	I1108 00:13:00.979515   50505 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1108 00:13:00.979516   50505 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.3
	I1108 00:13:00.980609   50505 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1108 00:13:00.980645   50505 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1108 00:13:00.980677   50505 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1108 00:13:00.980704   50505 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1108 00:13:00.980645   50505 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.3
	I1108 00:13:00.980733   50505 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:13:00.980949   50505 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.3
	I1108 00:13:00.980994   50505 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.3
	I1108 00:13:01.126154   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.3
	I1108 00:13:01.131334   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.3
	I1108 00:13:01.141929   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.3
	I1108 00:13:01.150051   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.3
	I1108 00:13:01.178472   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I1108 00:13:01.198519   50505 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.3" does not exist at hash "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076" in container runtime
	I1108 00:13:01.198569   50505 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.3
	I1108 00:13:01.198628   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:01.214419   50505 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.3" does not exist at hash "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3" in container runtime
	I1108 00:13:01.214470   50505 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1108 00:13:01.214527   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:01.249270   50505 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.3" does not exist at hash "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4" in container runtime
	I1108 00:13:01.249316   50505 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.3
	I1108 00:13:01.249321   50505 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.3" needs transfer: "registry.k8s.io/kube-proxy:v1.28.3" does not exist at hash "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf" in container runtime
	I1108 00:13:01.249354   50505 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.3
	I1108 00:13:01.249363   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:01.249398   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:01.257971   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1108 00:13:01.268557   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1108 00:13:01.279207   50505 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1108 00:13:01.279254   50505 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1108 00:13:01.279255   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.3
	I1108 00:13:01.279295   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:01.279304   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.3
	I1108 00:13:01.279365   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.3
	I1108 00:13:01.279492   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.3
	I1108 00:13:01.477649   50505 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1108 00:13:01.477691   50505 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1108 00:13:01.477740   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:01.477782   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3
	I1108 00:13:01.477888   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1108 00:13:01.477888   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3
	I1108 00:13:01.477963   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3
	I1108 00:13:01.478038   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3
	I1108 00:13:01.478005   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1108 00:13:01.478079   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1108 00:13:01.478116   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.3
	I1108 00:13:01.478121   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1108 00:13:01.489810   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1108 00:13:01.490983   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.3 (exists)
	I1108 00:13:01.491011   50505 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1108 00:13:01.491049   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1108 00:13:01.490984   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.3 (exists)
	I1108 00:13:01.556911   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1108 00:13:01.556996   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.3 (exists)
	I1108 00:13:01.557036   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1
	I1108 00:13:01.557048   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.3 (exists)
	I1108 00:13:01.576123   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1108 00:13:01.576251   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0
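
The `stat -c "%s %y"` runs above feed the "copy: skipping ... (exists)" decisions: a cached tarball is only re-sent when size or mtime differ. A minimal sketch of that comparison; the real code parses stat output gathered over SSH, while this sketch stats two local paths instead:

	package main

	import (
		"fmt"
		"os"
	)

	// needsCopy re-sends the tarball only when size or mtime differ.
	func needsCopy(local, remote string) (bool, error) {
		l, err := os.Stat(local)
		if err != nil {
			return false, err
		}
		r, err := os.Stat(remote)
		if err != nil {
			return true, nil // nothing on the remote side yet
		}
		return l.Size() != r.Size() || !l.ModTime().Equal(r.ModTime()), nil
	}

	func main() {
		copyIt, err := needsCopy("kube-apiserver_v1.28.3.tar", "/var/lib/minikube/images/kube-apiserver_v1.28.3")
		switch {
		case err != nil:
			fmt.Fprintln(os.Stderr, err)
		case copyIt:
			fmt.Println("copying tarball")
		default:
			fmt.Println("copy: skipping (exists)")
		}
	}
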
	I1108 00:13:02.001052   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:13:02.127888   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:02.128302   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:02.128333   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:02.128247   51493 retry.go:31] will retry after 498.0349ms: waiting for machine to come up
	I1108 00:13:02.627872   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:02.628339   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:02.628373   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:02.628296   51493 retry.go:31] will retry after 852.947554ms: waiting for machine to come up
	I1108 00:13:03.483507   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:03.484074   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:03.484119   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:03.484024   51493 retry.go:31] will retry after 1.040831469s: waiting for machine to come up
	I1108 00:13:04.526186   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:04.526503   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:04.526535   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:04.526446   51493 retry.go:31] will retry after 960.701342ms: waiting for machine to come up
	I1108 00:13:05.489041   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:05.489473   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:05.489509   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:05.489456   51493 retry.go:31] will retry after 1.729813733s: waiting for machine to come up
	I1108 00:13:04.536381   50505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3: (3.045307892s)
	I1108 00:13:04.536412   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 from cache
	I1108 00:13:04.536439   50505 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1108 00:13:04.536453   50505 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1: (2.979392017s)
	I1108 00:13:04.536485   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I1108 00:13:04.536491   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1108 00:13:04.536531   50505 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0: (2.960264305s)
	I1108 00:13:04.536549   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I1108 00:13:04.536590   50505 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.535505624s)
	I1108 00:13:04.536622   50505 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1108 00:13:04.536652   50505 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:13:04.536694   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:07.220832   50505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3: (2.68430655s)
	I1108 00:13:07.220863   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 from cache
	I1108 00:13:07.220898   50505 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1108 00:13:07.220902   50505 ssh_runner.go:235] Completed: which crictl: (2.684187653s)
	I1108 00:13:07.220982   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1108 00:13:07.221015   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:13:08.593275   50505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3: (1.372272111s)
	I1108 00:13:08.593311   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 from cache
	I1108 00:13:08.593326   50505 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.372286228s)
	I1108 00:13:08.593374   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1108 00:13:08.593338   50505 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.3
	I1108 00:13:08.593474   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1108 00:13:08.593479   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3
	I1108 00:13:07.221541   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:07.221969   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:07.221998   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:07.221953   51493 retry.go:31] will retry after 1.97898588s: waiting for machine to come up
	I1108 00:13:09.202332   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:09.202803   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:09.202831   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:09.202756   51493 retry.go:31] will retry after 2.565503631s: waiting for machine to come up
	I1108 00:13:11.769857   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:11.770332   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:11.770354   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:11.770292   51493 retry.go:31] will retry after 3.236419831s: waiting for machine to come up
	I1108 00:13:10.382696   50505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3: (1.789194848s)
	I1108 00:13:10.382726   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 from cache
	I1108 00:13:10.382747   50505 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.789249445s)
	I1108 00:13:10.382776   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1108 00:13:10.382752   50505 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1108 00:13:10.382828   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I1108 00:13:11.846184   50505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.463326325s)
	I1108 00:13:11.846222   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I1108 00:13:11.846254   50505 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I1108 00:13:11.846322   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I1108 00:13:15.008441   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:15.008899   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:15.008936   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:15.008860   51493 retry.go:31] will retry after 3.079379099s: waiting for machine to come up
	I1108 00:13:19.138865   50505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.292505697s)
	I1108 00:13:19.138899   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I1108 00:13:19.138926   50505 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1108 00:13:19.138987   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
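
The crio.go "Loading image:" lines above show the cached tarballs being pushed into the runtime's image store one at a time, each via "sudo podman load -i". A minimal sketch of that loop, with two of the paths from the log; assumed to run inside the VM where podman is available:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		tarballs := []string{
			"/var/lib/minikube/images/etcd_3.5.9-0",
			"/var/lib/minikube/images/storage-provisioner_v5",
		}
		for _, t := range tarballs {
			fmt.Println("Loading image:", t)
			if out, err := exec.Command("sudo", "podman", "load", "-i", t).CombinedOutput(); err != nil {
				fmt.Printf("load %s failed: %v\n%s", t, err, out)
				return
			}
		}
	}
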
	I1108 00:13:19.465800   51228 start.go:369] acquired machines lock for "default-k8s-diff-port-039263" in 1m18.442604828s
	I1108 00:13:19.465853   51228 start.go:96] Skipping create...Using existing machine configuration
	I1108 00:13:19.465863   51228 fix.go:54] fixHost starting: 
	I1108 00:13:19.466321   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:13:19.466373   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:13:19.485614   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32967
	I1108 00:13:19.486012   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:13:19.486457   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:13:19.486478   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:13:19.486839   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:13:19.487016   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:19.487158   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetState
	I1108 00:13:19.488697   51228 fix.go:102] recreateIfNeeded on default-k8s-diff-port-039263: state=Stopped err=<nil>
	I1108 00:13:19.488733   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	W1108 00:13:19.488889   51228 fix.go:128] unexpected machine state, will restart: <nil>
	I1108 00:13:19.490913   51228 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-039263" ...
	I1108 00:13:19.492333   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Start
	I1108 00:13:19.492481   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Ensuring networks are active...
	I1108 00:13:19.493162   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Ensuring network default is active
	I1108 00:13:19.493592   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Ensuring network mk-default-k8s-diff-port-039263 is active
	I1108 00:13:19.494016   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Getting domain xml...
	I1108 00:13:19.494668   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Creating domain...
	I1108 00:13:20.910918   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting to get IP...
	I1108 00:13:20.911948   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:20.912423   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:20.912517   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:20.912403   51635 retry.go:31] will retry after 265.914494ms: waiting for machine to come up
	I1108 00:13:18.092086   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.092516   50613 main.go:141] libmachine: (embed-certs-253253) Found IP for machine: 192.168.39.159
	I1108 00:13:18.092544   50613 main.go:141] libmachine: (embed-certs-253253) Reserving static IP address...
	I1108 00:13:18.092568   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has current primary IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.092947   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "embed-certs-253253", mac: "52:54:00:1a:6e:cb", ip: "192.168.39.159"} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.092980   50613 main.go:141] libmachine: (embed-certs-253253) DBG | skip adding static IP to network mk-embed-certs-253253 - found existing host DHCP lease matching {name: "embed-certs-253253", mac: "52:54:00:1a:6e:cb", ip: "192.168.39.159"}
	I1108 00:13:18.092999   50613 main.go:141] libmachine: (embed-certs-253253) Reserved static IP address: 192.168.39.159
	I1108 00:13:18.093019   50613 main.go:141] libmachine: (embed-certs-253253) Waiting for SSH to be available...
	I1108 00:13:18.093036   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Getting to WaitForSSH function...
	I1108 00:13:18.094941   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.095285   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.095311   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.095472   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Using SSH client type: external
	I1108 00:13:18.095487   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa (-rw-------)
	I1108 00:13:18.095519   50613 main.go:141] libmachine: (embed-certs-253253) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1108 00:13:18.095535   50613 main.go:141] libmachine: (embed-certs-253253) DBG | About to run SSH command:
	I1108 00:13:18.095545   50613 main.go:141] libmachine: (embed-certs-253253) DBG | exit 0
	I1108 00:13:18.184364   50613 main.go:141] libmachine: (embed-certs-253253) DBG | SSH cmd err, output: <nil>: 
	I1108 00:13:18.184700   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetConfigRaw
	I1108 00:13:18.264914   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetIP
	I1108 00:13:18.267404   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.267716   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.267752   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.267951   50613 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/config.json ...
	I1108 00:13:18.268153   50613 machine.go:88] provisioning docker machine ...
	I1108 00:13:18.268171   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:13:18.268382   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetMachineName
	I1108 00:13:18.268642   50613 buildroot.go:166] provisioning hostname "embed-certs-253253"
	I1108 00:13:18.268662   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetMachineName
	I1108 00:13:18.268783   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:18.270977   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.271275   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.271302   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.271485   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:18.271683   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.271873   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.272021   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:18.272185   50613 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:18.272549   50613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1108 00:13:18.272564   50613 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-253253 && echo "embed-certs-253253" | sudo tee /etc/hostname
	I1108 00:13:18.408618   50613 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-253253
	
	I1108 00:13:18.408655   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:18.411325   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.411629   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.411673   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.411793   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:18.412024   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.412204   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.412353   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:18.412513   50613 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:18.412864   50613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1108 00:13:18.412884   50613 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-253253' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-253253/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-253253' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 00:13:18.537585   50613 main.go:141] libmachine: SSH cmd err, output: <nil>: 
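
The hosts fix-up just executed is a plain shell template. As a minimal sketch (an assumption, not minikube's actual provisioning source; hostsFixupScript is a hypothetical helper), it can be rendered like this in Go:

	package main

	import "fmt"

	// hostsFixupScript renders the /etc/hosts repair script seen above:
	// make sure 127.0.1.1 resolves to the machine hostname.
	func hostsFixupScript(hostname string) string {
		return fmt.Sprintf(`
	if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
	  else
	    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
	  fi
	fi`, hostname)
	}

	func main() { fmt.Println(hostsFixupScript("embed-certs-253253")) }
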
	I1108 00:13:18.537611   50613 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1108 00:13:18.537628   50613 buildroot.go:174] setting up certificates
	I1108 00:13:18.537636   50613 provision.go:83] configureAuth start
	I1108 00:13:18.537644   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetMachineName
	I1108 00:13:18.537930   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetIP
	I1108 00:13:18.540544   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.540937   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.540966   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.541078   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:18.543184   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.543455   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.543486   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.543559   50613 provision.go:138] copyHostCerts
	I1108 00:13:18.543621   50613 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1108 00:13:18.543639   50613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1108 00:13:18.543692   50613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1108 00:13:18.543793   50613 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1108 00:13:18.543801   50613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1108 00:13:18.543823   50613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1108 00:13:18.543876   50613 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1108 00:13:18.543884   50613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1108 00:13:18.543900   50613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1108 00:13:18.543962   50613 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.embed-certs-253253 san=[192.168.39.159 192.168.39.159 localhost 127.0.0.1 minikube embed-certs-253253]
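
The server cert generated here is ordinary CA-signed x509 issuance carrying the SAN list shown in the log line. A self-contained crypto/x509 sketch (assumption: a throwaway CA is created inline, whereas the real flow loads ca.pem/ca-key.pem from the .minikube certs directory):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA so the sketch runs standalone.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert with the SANs from the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-253253"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			DNSNames:     []string{"localhost", "minikube", "embed-certs-253253"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.39.159"), net.ParseIP("127.0.0.1")},
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
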
	I1108 00:13:18.707824   50613 provision.go:172] copyRemoteCerts
	I1108 00:13:18.707880   50613 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 00:13:18.707905   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:18.710820   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.711181   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.711208   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.711437   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:18.711642   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.711815   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:18.711973   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:13:18.803200   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 00:13:18.827267   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1108 00:13:18.850710   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 00:13:18.876752   50613 provision.go:86] duration metric: configureAuth took 339.103407ms
	I1108 00:13:18.876781   50613 buildroot.go:189] setting minikube options for container-runtime
	I1108 00:13:18.876987   50613 config.go:182] Loaded profile config "embed-certs-253253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:13:18.877075   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:18.879751   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.880121   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.880149   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.880331   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:18.880501   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.880649   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.880772   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:18.880929   50613 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:18.881240   50613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1108 00:13:18.881257   50613 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 00:13:19.199987   50613 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 00:13:19.200012   50613 machine.go:91] provisioned docker machine in 931.846262ms
	I1108 00:13:19.200023   50613 start.go:300] post-start starting for "embed-certs-253253" (driver="kvm2")
	I1108 00:13:19.200035   50613 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 00:13:19.200057   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:13:19.200377   50613 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 00:13:19.200409   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:19.203230   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.203610   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:19.203644   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.203771   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:19.203963   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:19.204118   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:19.204231   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:13:19.297991   50613 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 00:13:19.303630   50613 info.go:137] Remote host: Buildroot 2021.02.12
	I1108 00:13:19.303655   50613 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1108 00:13:19.303721   50613 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1108 00:13:19.303831   50613 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1108 00:13:19.303956   50613 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 00:13:19.315605   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:13:19.339647   50613 start.go:303] post-start completed in 139.611237ms
	I1108 00:13:19.339665   50613 fix.go:56] fixHost completed within 19.805611247s
	I1108 00:13:19.339687   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:19.342291   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.342623   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:19.342648   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.342838   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:19.343019   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:19.343147   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:19.343323   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:19.343483   50613 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:19.343856   50613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1108 00:13:19.343868   50613 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1108 00:13:19.465645   50613 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699402399.415738784
	
	I1108 00:13:19.465670   50613 fix.go:206] guest clock: 1699402399.415738784
	I1108 00:13:19.465681   50613 fix.go:219] Guest: 2023-11-08 00:13:19.415738784 +0000 UTC Remote: 2023-11-08 00:13:19.339668655 +0000 UTC m=+237.442917453 (delta=76.070129ms)
	I1108 00:13:19.465704   50613 fix.go:190] guest clock delta is within tolerance: 76.070129ms
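
The clock check amounts to: run date +%s.%N in the guest, parse the result, and compare it with the host's wall clock. A sketch of that comparison (an assumption, not minikube's fix.go; float parsing loses a little nanosecond precision, which is fine for a skew check):

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// clockDelta compares the guest's `date +%s.%N` output to a host timestamp.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return host.Sub(guest), nil
	}

	func main() {
		d, err := clockDelta("1699402399.415738784", time.Now())
		fmt.Println(d, err)
	}
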
	I1108 00:13:19.465710   50613 start.go:83] releasing machines lock for "embed-certs-253253", held for 19.931686858s
	I1108 00:13:19.465738   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:13:19.465996   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetIP
	I1108 00:13:19.468862   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.469185   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:19.469223   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.469365   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:13:19.469898   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:13:19.470091   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:13:19.470174   50613 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 00:13:19.470215   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:19.470300   50613 ssh_runner.go:195] Run: cat /version.json
	I1108 00:13:19.470321   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:19.473140   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.473285   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.473517   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:19.473562   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.473594   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:19.473612   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.473662   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:19.473777   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:19.473843   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:19.474004   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:19.474007   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:19.474153   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:13:19.474192   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:19.474344   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:13:19.565638   50613 ssh_runner.go:195] Run: systemctl --version
	I1108 00:13:19.591686   50613 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 00:13:19.747192   50613 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 00:13:19.755053   50613 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 00:13:19.755134   50613 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:13:19.774522   50613 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 00:13:19.774551   50613 start.go:472] detecting cgroup driver to use...
	I1108 00:13:19.774652   50613 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 00:13:19.795492   50613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 00:13:19.809888   50613 docker.go:203] disabling cri-docker service (if available) ...
	I1108 00:13:19.809958   50613 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 00:13:19.823108   50613 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 00:13:19.835588   50613 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 00:13:19.940017   50613 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 00:13:20.075405   50613 docker.go:219] disabling docker service ...
	I1108 00:13:20.075460   50613 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 00:13:20.090949   50613 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 00:13:20.103551   50613 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 00:13:20.226887   50613 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 00:13:20.352088   50613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 00:13:20.367626   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 00:13:20.388084   50613 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1108 00:13:20.388153   50613 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:20.398506   50613 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 00:13:20.398573   50613 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:20.408335   50613 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:20.417991   50613 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
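
After these three edits the relevant fragment of /etc/crio/crio.conf.d/02-crio.conf would read roughly as follows (the section headers are an assumption based on the stock crio.conf layout; only the three key/value pairs come from the log):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
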
	I1108 00:13:20.427599   50613 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 00:13:20.439537   50613 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 00:13:20.450914   50613 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1108 00:13:20.450972   50613 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1108 00:13:20.464456   50613 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 00:13:20.475133   50613 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 00:13:20.586162   50613 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 00:13:20.799540   50613 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 00:13:20.799615   50613 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 00:13:20.808503   50613 start.go:540] Will wait 60s for crictl version
	I1108 00:13:20.808551   50613 ssh_runner.go:195] Run: which crictl
	I1108 00:13:20.812371   50613 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 00:13:20.853073   50613 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1108 00:13:20.853166   50613 ssh_runner.go:195] Run: crio --version
	I1108 00:13:20.904737   50613 ssh_runner.go:195] Run: crio --version
	I1108 00:13:20.958281   50613 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1108 00:13:20.959792   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetIP
	I1108 00:13:20.962399   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:20.962740   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:20.962775   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:20.963037   50613 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1108 00:13:20.967403   50613 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:13:20.980199   50613 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1108 00:13:20.980261   50613 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:13:21.024679   50613 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1108 00:13:21.024757   50613 ssh_runner.go:195] Run: which lz4
	I1108 00:13:21.028861   50613 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1108 00:13:21.032736   50613 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1108 00:13:21.032762   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
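
The stat probe above is a plain existence check gating the ~457 MB transfer. A local-execution sketch of the pattern (assumption: run is a hypothetical stand-in for minikube's ssh_runner, which issues the same command remotely):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run stands in for the remote ssh_runner; here it executes locally.
	func run(name string, args ...string) error {
		return exec.Command(name, args...).Run()
	}

	func main() {
		if err := run("stat", "-c", "%s %y", "/preloaded.tar.lz4"); err != nil {
			fmt.Println("not present; would scp the preload tarball")
		} else {
			fmt.Println("already present; transfer skipped")
		}
	}
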
	I1108 00:13:19.898602   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1108 00:13:19.898655   50505 cache_images.go:123] Successfully loaded all cached images
	I1108 00:13:19.898663   50505 cache_images.go:92] LoadImages completed in 18.919280882s
	I1108 00:13:19.898742   50505 ssh_runner.go:195] Run: crio config
	I1108 00:13:19.970909   50505 cni.go:84] Creating CNI manager for ""
	I1108 00:13:19.970936   50505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:13:19.970958   50505 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1108 00:13:19.970986   50505 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.176 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-320390 NodeName:no-preload-320390 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 00:13:19.971171   50505 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.176
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-320390"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.176
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.176"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 00:13:19.971273   50505 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-320390 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:no-preload-320390 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1108 00:13:19.971347   50505 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1108 00:13:19.984469   50505 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 00:13:19.984551   50505 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 00:13:19.995491   50505 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1108 00:13:20.013609   50505 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 00:13:20.031507   50505 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I1108 00:13:20.051978   50505 ssh_runner.go:195] Run: grep 192.168.61.176	control-plane.minikube.internal$ /etc/hosts
	I1108 00:13:20.057139   50505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.176	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:13:20.071438   50505 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390 for IP: 192.168.61.176
	I1108 00:13:20.071471   50505 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:13:20.071635   50505 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1108 00:13:20.071691   50505 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1108 00:13:20.071782   50505 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/client.key
	I1108 00:13:20.071848   50505 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/apiserver.key.492ad1cf
	I1108 00:13:20.071899   50505 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/proxy-client.key
	I1108 00:13:20.072026   50505 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem (1338 bytes)
	W1108 00:13:20.072064   50505 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848_empty.pem, impossibly tiny 0 bytes
	I1108 00:13:20.072080   50505 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 00:13:20.072130   50505 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1108 00:13:20.072167   50505 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1108 00:13:20.072205   50505 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1108 00:13:20.072260   50505 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:13:20.073092   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1108 00:13:20.099422   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 00:13:20.126257   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 00:13:20.153126   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 00:13:20.184849   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 00:13:20.215515   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 00:13:20.247686   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 00:13:20.277712   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 00:13:20.304438   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /usr/share/ca-certificates/168482.pem (1708 bytes)
	I1108 00:13:20.330321   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 00:13:20.361411   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem --> /usr/share/ca-certificates/16848.pem (1338 bytes)
	I1108 00:13:20.390456   50505 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 00:13:20.410634   50505 ssh_runner.go:195] Run: openssl version
	I1108 00:13:20.418597   50505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168482.pem && ln -fs /usr/share/ca-certificates/168482.pem /etc/ssl/certs/168482.pem"
	I1108 00:13:20.431853   50505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168482.pem
	I1108 00:13:20.438127   50505 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1108 00:13:20.438271   50505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168482.pem
	I1108 00:13:20.445644   50505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168482.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 00:13:20.456959   50505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 00:13:20.466413   50505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:20.472311   50505 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:20.472365   50505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:20.477965   50505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 00:13:20.487454   50505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16848.pem && ln -fs /usr/share/ca-certificates/16848.pem /etc/ssl/certs/16848.pem"
	I1108 00:13:20.496731   50505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16848.pem
	I1108 00:13:20.502531   50505 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1108 00:13:20.502591   50505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16848.pem
	I1108 00:13:20.509683   50505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16848.pem /etc/ssl/certs/51391683.0"
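
The test -L / ln -fs pairs above implement OpenSSL's hashed-directory lookup: a CA file is found through a <subject-hash>.0 symlink in /etc/ssl/certs. A sketch of the same step (assumption: shelling out to openssl, as the log itself does; linkBySubjectHash is a hypothetical helper):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkBySubjectHash creates certsDir/<subject-hash>.0 -> pemPath.
	func linkBySubjectHash(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		link := fmt.Sprintf("%s/%s.0", certsDir, strings.TrimSpace(string(out)))
		os.Remove(link) // ignore error: the link may not exist yet
		return os.Symlink(pemPath, link)
	}

	func main() {
		fmt.Println(linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
	}
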
	I1108 00:13:20.520960   50505 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1108 00:13:20.525545   50505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 00:13:20.531367   50505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 00:13:20.537422   50505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 00:13:20.543607   50505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 00:13:20.548942   50505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 00:13:20.554419   50505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
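
Each openssl x509 -checkend 86400 run above asks one question: will the cert still be valid 24 hours from now? The native Go equivalent, as a sketch (assumption: single-certificate PEM files):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// stillValidIn reports whether the PEM cert at path is valid `window` from now.
	func stillValidIn(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).Before(cert.NotAfter), nil
	}

	func main() {
		fmt.Println(stillValidIn("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour))
	}
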
	I1108 00:13:20.559633   50505 kubeadm.go:404] StartCluster: {Name:no-preload-320390 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-320390 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.176 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:13:20.559719   50505 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 00:13:20.559766   50505 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:13:20.603718   50505 cri.go:89] found id: ""
	I1108 00:13:20.603795   50505 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 00:13:20.613389   50505 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1108 00:13:20.613418   50505 kubeadm.go:636] restartCluster start
	I1108 00:13:20.613476   50505 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 00:13:20.622276   50505 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:20.623645   50505 kubeconfig.go:92] found "no-preload-320390" server: "https://192.168.61.176:8443"
	I1108 00:13:20.626874   50505 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 00:13:20.638188   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:20.638238   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:20.649536   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:20.649553   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:20.649610   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:20.660145   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:21.160858   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:21.160936   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:21.174163   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:21.660441   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:21.660526   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:21.675795   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:22.160281   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:22.160358   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:22.175777   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:22.660249   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:22.660328   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:22.675747   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:23.160280   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:23.160360   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:23.174686   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:23.661260   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:23.661343   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:23.675936   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:24.160440   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:24.160558   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:24.174501   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
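
The block above is a fixed-interval poll: pgrep for the apiserver roughly every 500ms until it shows up. A sketch of that loop (an assumption; the real code runs pgrep over SSH and layers its own timeout handling on top):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls pgrep until the apiserver process appears.
	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil && len(out) > 0 {
				return nil // pid found
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not appear within %v", timeout)
	}

	func main() { fmt.Println(waitForAPIServer(30 * time.Second)) }
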
	I1108 00:13:21.180066   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:21.180534   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:21.180563   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:21.180492   51635 retry.go:31] will retry after 320.996627ms: waiting for machine to come up
	I1108 00:13:21.503202   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:21.503721   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:21.503750   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:21.503689   51635 retry.go:31] will retry after 431.944242ms: waiting for machine to come up
	I1108 00:13:21.937564   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:21.938025   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:21.938054   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:21.937972   51635 retry.go:31] will retry after 592.354358ms: waiting for machine to come up
	I1108 00:13:22.531850   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:22.532321   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:22.532364   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:22.532272   51635 retry.go:31] will retry after 589.753727ms: waiting for machine to come up
	I1108 00:13:23.124275   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:23.124784   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:23.124825   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:23.124746   51635 retry.go:31] will retry after 596.910282ms: waiting for machine to come up
	I1108 00:13:23.722967   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:23.723389   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:23.723419   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:23.723349   51635 retry.go:31] will retry after 793.320391ms: waiting for machine to come up
	I1108 00:13:24.518525   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:24.518953   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:24.518985   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:24.518914   51635 retry.go:31] will retry after 1.247294281s: waiting for machine to come up
	I1108 00:13:25.768137   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:25.768598   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:25.768634   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:25.768541   51635 retry.go:31] will retry after 1.468389149s: waiting for machine to come up
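
The retry.go lines above show a growing, jittered wait between DHCP lease lookups (321ms, 432ms, ... 1.47s). A sketch of that backoff shape (an assumption; the actual intervals come from minikube's retry package):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff retries fn with a linearly growing, jittered wait.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		for i := 0; i < attempts; i++ {
			if err := fn(); err == nil {
				return nil
			}
			wait := time.Duration(i+1)*base + time.Duration(rand.Int63n(int64(base)))
			time.Sleep(wait)
		}
		return errors.New("machine did not come up")
	}

	func main() {
		err := retryWithBackoff(5, 300*time.Millisecond, func() error {
			return errors.New("no IP yet") // stand-in for the DHCP lease lookup
		})
		fmt.Println(err)
	}
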
	I1108 00:13:22.802292   50613 crio.go:444] Took 1.773480 seconds to copy over tarball
	I1108 00:13:22.802374   50613 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1108 00:13:25.811996   50613 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.009592787s)
	I1108 00:13:25.812027   50613 crio.go:451] Took 3.009706 seconds to extract the tarball
	I1108 00:13:25.812036   50613 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1108 00:13:25.852011   50613 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:13:25.903032   50613 crio.go:496] all images are preloaded for cri-o runtime.
	I1108 00:13:25.903055   50613 cache_images.go:84] Images are preloaded, skipping loading
	I1108 00:13:25.903160   50613 ssh_runner.go:195] Run: crio config
	I1108 00:13:25.964562   50613 cni.go:84] Creating CNI manager for ""
	I1108 00:13:25.964585   50613 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:13:25.964601   50613 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1108 00:13:25.964618   50613 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.159 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-253253 NodeName:embed-certs-253253 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 00:13:25.964768   50613 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-253253"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
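The generated file above stitches four kubeadm API objects into one multi-document YAML: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. One quick way to sanity-check such a file before handing it to kubeadm is to split on the document separator and confirm each piece parses and carries an apiVersion/kind header; a sketch using gopkg.in/yaml.v3 (the splitting and checks are illustrative, not how minikube validates it):

	package main

	import (
		"fmt"
		"os"
		"strings"

		"gopkg.in/yaml.v3"
	)

	func main() {
		raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		// The file is multi-document YAML; check each document parses
		// and names the kubeadm API object kubeadm expects to find.
		for i, doc := range strings.Split(string(raw), "\n---\n") {
			var obj struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := yaml.Unmarshal([]byte(doc), &obj); err != nil {
				fmt.Printf("document %d: bad YAML: %v\n", i, err)
				continue
			}
			fmt.Printf("document %d: %s/%s\n", i, obj.APIVersion, obj.Kind)
		}
	}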
	
	I1108 00:13:25.964869   50613 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-253253 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:embed-certs-253253 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1108 00:13:25.964931   50613 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1108 00:13:25.973956   50613 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 00:13:25.974031   50613 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 00:13:25.982070   50613 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1108 00:13:26.001066   50613 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 00:13:26.020258   50613 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1108 00:13:26.039418   50613 ssh_runner.go:195] Run: grep 192.168.39.159	control-plane.minikube.internal$ /etc/hosts
	I1108 00:13:26.043133   50613 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
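The /etc/hosts rewrite above is idempotent: grep -v drops any stale control-plane.minikube.internal line, the fresh mapping is appended, and the temp file is copied back over /etc/hosts with sudo. A sketch that assembles the same shell pipeline in Go (the command text mirrors the log; the helper name is illustrative):

	package main

	import "fmt"

	// hostsUpdateCmd builds the idempotent /etc/hosts rewrite seen in
	// the log: drop any stale entry for name, append the new ip/name
	// pair, then copy the temp file over /etc/hosts with sudo.
	func hostsUpdateCmd(ip, name string) string {
		return fmt.Sprintf("{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"", name, ip, name)
	}

	func main() {
		fmt.Println(hostsUpdateCmd("192.168.39.159", "control-plane.minikube.internal"))
	}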
	I1108 00:13:26.055865   50613 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253 for IP: 192.168.39.159
	I1108 00:13:26.055902   50613 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:13:26.056069   50613 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1108 00:13:26.056268   50613 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1108 00:13:26.056374   50613 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/client.key
	I1108 00:13:26.128533   50613 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/apiserver.key.b15c5797
	I1108 00:13:26.128666   50613 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/proxy-client.key
	I1108 00:13:26.128842   50613 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem (1338 bytes)
	W1108 00:13:26.128884   50613 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848_empty.pem, impossibly tiny 0 bytes
	I1108 00:13:26.128895   50613 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 00:13:26.128930   50613 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1108 00:13:26.128953   50613 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1108 00:13:26.128975   50613 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1108 00:13:26.129016   50613 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:13:26.129621   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1108 00:13:26.153776   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 00:13:26.179006   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 00:13:26.202199   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 00:13:26.225241   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 00:13:26.247745   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 00:13:26.270546   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 00:13:26.297075   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 00:13:26.320835   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem --> /usr/share/ca-certificates/16848.pem (1338 bytes)
	I1108 00:13:26.344068   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /usr/share/ca-certificates/168482.pem (1708 bytes)
	I1108 00:13:26.367085   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 00:13:26.391491   50613 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 00:13:26.408055   50613 ssh_runner.go:195] Run: openssl version
	I1108 00:13:26.413824   50613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168482.pem && ln -fs /usr/share/ca-certificates/168482.pem /etc/ssl/certs/168482.pem"
	I1108 00:13:26.423666   50613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168482.pem
	I1108 00:13:26.428281   50613 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1108 00:13:26.428332   50613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168482.pem
	I1108 00:13:26.433901   50613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168482.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 00:13:26.443832   50613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 00:13:26.453722   50613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:26.458290   50613 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:26.458341   50613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:26.464035   50613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 00:13:26.473908   50613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16848.pem && ln -fs /usr/share/ca-certificates/16848.pem /etc/ssl/certs/16848.pem"
	I1108 00:13:26.483600   50613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16848.pem
	I1108 00:13:26.488053   50613 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1108 00:13:26.488113   50613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16848.pem
	I1108 00:13:26.493571   50613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16848.pem /etc/ssl/certs/51391683.0"
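The ln -fs commands above follow OpenSSL's hashed-directory convention: each CA under /etc/ssl/certs is reachable through a symlink named after its subject hash (the output of openssl x509 -hash -noout) plus a .0 suffix, which is how TLS libraries locate trusted CAs. A minimal sketch of the same step in Go, shelling out to openssl (paths taken from the log; error handling trimmed):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCA creates /etc/ssl/certs/<subject-hash>.0 -> cert, the layout
	// OpenSSL uses to find trusted CAs by hash.
	func linkCA(cert string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // mimic ln -fs: replace any stale link
		return os.Symlink(cert, link)
	}

	func main() {
		if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}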
	I1108 00:13:26.503466   50613 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1108 00:13:26.508047   50613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 00:13:26.514165   50613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 00:13:26.520278   50613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 00:13:26.526421   50613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 00:13:26.532388   50613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 00:13:26.538323   50613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1108 00:13:26.544215   50613 kubeadm.go:404] StartCluster: {Name:embed-certs-253253 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-253253 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:13:26.544287   50613 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 00:13:26.544330   50613 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:13:26.586501   50613 cri.go:89] found id: ""
	I1108 00:13:26.586578   50613 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 00:13:26.596647   50613 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1108 00:13:26.596676   50613 kubeadm.go:636] restartCluster start
	I1108 00:13:26.596734   50613 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 00:13:26.605901   50613 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:26.607305   50613 kubeconfig.go:92] found "embed-certs-253253" server: "https://192.168.39.159:8443"
	I1108 00:13:26.610434   50613 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 00:13:26.619238   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:26.619291   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:26.630724   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:26.630746   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:26.630787   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:26.641997   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:24.660263   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:24.660349   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:24.675197   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:25.160678   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:25.160774   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:25.172593   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:25.660613   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:25.660696   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:25.672242   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:26.160884   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:26.160978   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:26.174734   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:26.660269   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:26.660337   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:26.671721   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:27.160250   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:27.160344   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:27.171104   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:27.660667   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:27.660729   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:27.671899   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:28.160408   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:28.160471   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:28.170733   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:28.660264   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:28.660338   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:28.671482   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:29.161084   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:29.161163   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:29.172174   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
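Both restart flows above poll for the apiserver the same way: run sudo pgrep -xnf kube-apiserver.*minikube.* roughly every 500ms and treat a non-zero exit (pgrep found nothing) as "not up yet". A condensed sketch of that poll loop; runSSH is a stand-in for minikube's ssh_runner and executes locally so the sketch is self-contained:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	// runSSH stands in for minikube's ssh_runner; here it runs the
	// command locally instead of over SSH.
	func runSSH(args ...string) error {
		return exec.Command(args[0], args[1:]...).Run()
	}

	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits 0 only once a matching process exists.
			if err := runSSH("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*"); err == nil {
				return nil
			}
			fmt.Println("Checking apiserver status ...")
			time.Sleep(500 * time.Millisecond)
		}
		return errors.New("apiserver process never appeared")
	}

	func main() {
		fmt.Println(waitForAPIServerProcess(2 * time.Minute))
	}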
	I1108 00:13:27.238049   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:27.238487   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:27.238518   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:27.238428   51635 retry.go:31] will retry after 1.602246301s: waiting for machine to come up
	I1108 00:13:28.842785   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:28.843235   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:28.843259   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:28.843188   51635 retry.go:31] will retry after 2.218327688s: waiting for machine to come up
	I1108 00:13:27.142567   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:27.242647   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:27.256767   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:27.642212   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:27.642306   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:27.654185   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:28.142751   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:28.142832   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:28.154141   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:28.642738   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:28.642817   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:28.654476   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:29.143085   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:29.143168   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:29.154553   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:29.642422   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:29.642499   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:29.658048   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:30.142497   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:30.142568   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:30.153710   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:30.642216   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:30.642291   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:30.658036   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:31.142547   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:31.142634   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:31.159124   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:31.642720   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:31.642810   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:31.654593   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:29.660882   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:29.660944   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:29.675528   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:30.161058   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:30.161121   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:30.171493   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:30.638722   50505 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1108 00:13:30.638762   50505 kubeadm.go:1128] stopping kube-system containers ...
	I1108 00:13:30.638776   50505 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1108 00:13:30.638825   50505 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:13:30.677982   50505 cri.go:89] found id: ""
	I1108 00:13:30.678064   50505 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1108 00:13:30.693650   50505 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:13:30.702679   50505 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:13:30.702757   50505 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:13:30.711179   50505 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1108 00:13:30.711212   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:30.843638   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:31.970868   50505 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.127188218s)
	I1108 00:13:31.970904   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:32.167903   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:32.242076   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
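Rather than a full kubeadm init, the restart path replays individual init phases in order: certs, kubeconfig, kubelet-start, control-plane, and etcd, all against the regenerated /var/tmp/minikube/kubeadm.yaml. A sketch of that sequence; the command strings are copied from the log, while the bare-bones runner is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, phase := range phases {
			cmd := fmt.Sprintf(
				`sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
				phase)
			// Each phase must succeed before the next one runs.
			if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
				fmt.Printf("phase %q failed: %v\n", phase, err)
				return
			}
		}
	}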
	I1108 00:13:32.324914   50505 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:13:32.325001   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:32.342576   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:32.861296   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:33.360958   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:33.861308   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:31.062973   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:31.063425   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:31.063465   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:31.063370   51635 retry.go:31] will retry after 2.935881965s: waiting for machine to come up
	I1108 00:13:34.002009   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:34.002456   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:34.002481   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:34.002385   51635 retry.go:31] will retry after 2.918632194s: waiting for machine to come up
	I1108 00:13:32.142573   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:32.142668   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:32.156513   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:32.643129   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:32.643203   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:32.654790   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:33.143023   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:33.143114   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:33.159475   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:33.642631   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:33.642728   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:33.658632   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:34.142142   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:34.142218   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:34.158375   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:34.642356   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:34.642437   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:34.657692   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:35.142180   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:35.142276   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:35.157616   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:35.642121   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:35.642194   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:35.656642   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:36.142162   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:36.142270   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:36.153340   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:36.619909   50613 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1108 00:13:36.619941   50613 kubeadm.go:1128] stopping kube-system containers ...
	I1108 00:13:36.619958   50613 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1108 00:13:36.620035   50613 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:13:36.656935   50613 cri.go:89] found id: ""
	I1108 00:13:36.657008   50613 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1108 00:13:36.671784   50613 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:13:36.680073   50613 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:13:36.680120   50613 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:13:36.688560   50613 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1108 00:13:36.688575   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:36.802484   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:34.361558   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:34.860720   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:34.881793   50505 api_server.go:72] duration metric: took 2.55688905s to wait for apiserver process to appear ...
	I1108 00:13:34.881823   50505 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:13:34.881843   50505 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1108 00:13:38.396447   50505 api_server.go:279] https://192.168.61.176:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:13:38.396488   50505 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:13:38.396503   50505 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1108 00:13:38.471135   50505 api_server.go:279] https://192.168.61.176:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:13:38.471165   50505 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:13:38.971845   50505 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1108 00:13:38.977126   50505 api_server.go:279] https://192.168.61.176:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:13:38.977163   50505 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:13:39.472030   50505 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1108 00:13:39.477778   50505 api_server.go:279] https://192.168.61.176:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:13:39.477810   50505 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:13:39.971333   50505 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1108 00:13:39.977224   50505 api_server.go:279] https://192.168.61.176:8443/healthz returned 200:
	ok
	I1108 00:13:39.987415   50505 api_server.go:141] control plane version: v1.28.3
	I1108 00:13:39.987446   50505 api_server.go:131] duration metric: took 5.10561478s to wait for apiserver health ...
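The health wait above tolerates intermediate 403s (anonymous access is denied until RBAC bootstrap finishes) and 500s (poststarthook/rbac/bootstrap-roles still failing) and only stops on a 200 "ok". A sketch of such a poll against a freshly started apiserver; InsecureSkipVerify is used because the cert chains to minikube's private CA, and the loop structure is illustrative rather than minikube's code:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver cert is signed by minikube's own CA, so
			// verification is skipped in this sketch.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get("https://192.168.61.176:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned 200: %s\n", body)
					return
				}
				// 403 and 500 are expected while bootstrap hooks finish.
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}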
	I1108 00:13:39.987456   50505 cni.go:84] Creating CNI manager for ""
	I1108 00:13:39.987465   50505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:13:39.989270   50505 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:13:36.922427   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:36.922874   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:36.922916   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:36.922824   51635 retry.go:31] will retry after 3.960656744s: waiting for machine to come up
	I1108 00:13:40.886022   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:40.886563   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Found IP for machine: 192.168.72.116
	I1108 00:13:40.886591   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has current primary IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:40.886601   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Reserving static IP address...
	I1108 00:13:40.886974   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-039263", mac: "52:54:00:aa:72:05", ip: "192.168.72.116"} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:40.887012   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | skip adding static IP to network mk-default-k8s-diff-port-039263 - found existing host DHCP lease matching {name: "default-k8s-diff-port-039263", mac: "52:54:00:aa:72:05", ip: "192.168.72.116"}
	I1108 00:13:40.887037   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Getting to WaitForSSH function...
	I1108 00:13:40.887058   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Reserved static IP address: 192.168.72.116
	I1108 00:13:40.887072   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for SSH to be available...
	I1108 00:13:40.889373   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:40.889771   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:40.889803   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:40.889991   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Using SSH client type: external
	I1108 00:13:40.890018   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa (-rw-------)
	I1108 00:13:40.890054   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1108 00:13:40.890068   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | About to run SSH command:
	I1108 00:13:40.890082   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | exit 0
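WaitForSSH probes readiness by running a trivial command (exit 0) through an external ssh client with host-key checking disabled; a zero exit means sshd is up and the key is accepted. A sketch of that probe, with flags trimmed from the log's ssh invocation (the retry cadence is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady returns true once `ssh ... exit 0` succeeds against host.
	func sshReady(host, key string) bool {
		cmd := exec.Command("ssh",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-i", key, "docker@"+host, "exit 0")
		return cmd.Run() == nil
	}

	func main() {
		key := "/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa"
		for !sshReady("192.168.72.116", key) {
			fmt.Println("Waiting for SSH to be available...")
			time.Sleep(2 * time.Second)
		}
		fmt.Println("SSH is up")
	}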
	I1108 00:13:37.573684   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:37.781978   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:37.863424   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:37.935306   50613 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:13:37.935377   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:37.947059   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:38.458806   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:38.959076   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:39.459045   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:39.959244   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:40.458249   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:40.480623   50613 api_server.go:72] duration metric: took 2.545315304s to wait for apiserver process to appear ...
	I1108 00:13:40.480650   50613 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:13:40.480668   50613 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1108 00:13:42.285976   50022 start.go:369] acquired machines lock for "old-k8s-version-590541" in 56.809842177s
	I1108 00:13:42.286028   50022 start.go:96] Skipping create...Using existing machine configuration
	I1108 00:13:42.286039   50022 fix.go:54] fixHost starting: 
	I1108 00:13:42.286455   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:13:42.286492   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:13:42.305869   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46287
	I1108 00:13:42.306363   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:13:42.306845   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:13:42.306871   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:13:42.307221   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:13:42.307548   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:13:42.307740   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetState
	I1108 00:13:42.309513   50022 fix.go:102] recreateIfNeeded on old-k8s-version-590541: state=Stopped err=<nil>
	I1108 00:13:42.309539   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	W1108 00:13:42.309706   50022 fix.go:128] unexpected machine state, will restart: <nil>
	I1108 00:13:42.311819   50022 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-590541" ...
	I1108 00:13:40.997357   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | SSH cmd err, output: <nil>: 
	I1108 00:13:40.997688   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetConfigRaw
	I1108 00:13:40.998394   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetIP
	I1108 00:13:41.001148   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.001578   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.001612   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.001940   51228 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/config.json ...
	I1108 00:13:41.002174   51228 machine.go:88] provisioning docker machine ...
	I1108 00:13:41.002197   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:41.002421   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetMachineName
	I1108 00:13:41.002577   51228 buildroot.go:166] provisioning hostname "default-k8s-diff-port-039263"
	I1108 00:13:41.002600   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetMachineName
	I1108 00:13:41.002800   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:41.005167   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.005544   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.005584   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.005873   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:41.006029   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.006176   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.006291   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:41.006425   51228 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:41.007012   51228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.116 22 <nil> <nil>}
	I1108 00:13:41.007036   51228 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-039263 && echo "default-k8s-diff-port-039263" | sudo tee /etc/hostname
	I1108 00:13:41.168664   51228 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-039263
	
	I1108 00:13:41.168698   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:41.171709   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.172090   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.172132   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.172266   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:41.172457   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.172650   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.172867   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:41.173130   51228 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:41.173626   51228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.116 22 <nil> <nil>}
	I1108 00:13:41.173654   51228 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-039263' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-039263/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-039263' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 00:13:41.324510   51228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
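
Editor's note: the two SSH commands above set the guest hostname and then patch /etc/hosts so the name resolves locally. A minimal Go sketch of how such a script can be rendered for an arbitrary hostname follows; it is illustrative only, not minikube's actual provisioning source, and the function name hostsFixupScript is made up for this example.

	// Hypothetical sketch: render the /etc/hosts fixup script shown in the
	// log above for an arbitrary hostname.
	package main
	
	import "fmt"
	
	// hostsFixupScript returns a shell snippet that rewrites the 127.0.1.1
	// entry (or appends one) so /etc/hosts resolves the machine's hostname.
	func hostsFixupScript(hostname string) string {
		return fmt.Sprintf(`
			if ! grep -xq '.*\s%[1]s' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
				else
					echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
				fi
			fi`, hostname)
	}
	
	func main() {
		fmt.Println(hostsFixupScript("default-k8s-diff-port-039263"))
	}
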
	I1108 00:13:41.324539   51228 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1108 00:13:41.324586   51228 buildroot.go:174] setting up certificates
	I1108 00:13:41.324598   51228 provision.go:83] configureAuth start
	I1108 00:13:41.324610   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetMachineName
	I1108 00:13:41.324933   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetIP
	I1108 00:13:41.327797   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.328176   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.328213   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.328321   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:41.330558   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.330921   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.330955   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.331062   51228 provision.go:138] copyHostCerts
	I1108 00:13:41.331128   51228 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1108 00:13:41.331150   51228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1108 00:13:41.331222   51228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1108 00:13:41.331337   51228 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1108 00:13:41.331355   51228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1108 00:13:41.331387   51228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1108 00:13:41.331467   51228 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1108 00:13:41.331479   51228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1108 00:13:41.331506   51228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1108 00:13:41.331592   51228 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-039263 san=[192.168.72.116 192.168.72.116 localhost 127.0.0.1 minikube default-k8s-diff-port-039263]
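
Editor's note: the provision step above issues a server certificate whose SAN list combines the machine IP, localhost, and the profile name. The Go sketch below shows one way to produce such a certificate with crypto/x509; the self-signed CA stands in for ca.pem/ca-key.pem, error handling is elided, and none of this is minikube's actual provision code.

	// Hypothetical sketch: issue a server cert whose SANs mirror the
	// san=[...] list in the log line above.
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// Self-signed CA key pair standing in for ca.pem/ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)
	
		// Server certificate carrying the IP and DNS SANs from the log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srv := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-039263"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(10, 0, 0),
			IPAddresses:  []net.IP{net.ParseIP("192.168.72.116"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "default-k8s-diff-port-039263"},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, srvKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
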
	I1108 00:13:41.452051   51228 provision.go:172] copyRemoteCerts
	I1108 00:13:41.452123   51228 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 00:13:41.452156   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:41.454755   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.455056   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.455089   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.455288   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:41.455512   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.455704   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:41.455831   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:13:41.554387   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 00:13:41.586357   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 00:13:41.616703   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1108 00:13:41.646461   51228 provision.go:86] duration metric: configureAuth took 321.850044ms
	I1108 00:13:41.646489   51228 buildroot.go:189] setting minikube options for container-runtime
	I1108 00:13:41.646730   51228 config.go:182] Loaded profile config "default-k8s-diff-port-039263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:13:41.646825   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:41.650386   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.650813   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.650856   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.651031   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:41.651232   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.651422   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.651598   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:41.651763   51228 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:41.652302   51228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.116 22 <nil> <nil>}
	I1108 00:13:41.652325   51228 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 00:13:42.006373   51228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 00:13:42.006401   51228 machine.go:91] provisioned docker machine in 1.004212938s
	I1108 00:13:42.006414   51228 start.go:300] post-start starting for "default-k8s-diff-port-039263" (driver="kvm2")
	I1108 00:13:42.006426   51228 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 00:13:42.006445   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:42.006785   51228 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 00:13:42.006811   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:42.009619   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.009950   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:42.009986   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.010127   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:42.010344   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:42.010515   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:42.010673   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:13:42.106366   51228 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 00:13:42.110676   51228 info.go:137] Remote host: Buildroot 2021.02.12
	I1108 00:13:42.110701   51228 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1108 00:13:42.110770   51228 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1108 00:13:42.110869   51228 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1108 00:13:42.110972   51228 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 00:13:42.121223   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:13:42.146966   51228 start.go:303] post-start completed in 140.536976ms
	I1108 00:13:42.146996   51228 fix.go:56] fixHost completed within 22.681133015s
	I1108 00:13:42.147019   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:42.149705   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.150132   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:42.150165   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.150406   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:42.150606   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:42.150818   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:42.150988   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:42.151156   51228 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:42.151511   51228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.116 22 <nil> <nil>}
	I1108 00:13:42.151523   51228 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1108 00:13:42.285789   51228 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699402422.233004693
	
	I1108 00:13:42.285815   51228 fix.go:206] guest clock: 1699402422.233004693
	I1108 00:13:42.285823   51228 fix.go:219] Guest: 2023-11-08 00:13:42.233004693 +0000 UTC Remote: 2023-11-08 00:13:42.146999966 +0000 UTC m=+101.273648910 (delta=86.004727ms)
	I1108 00:13:42.285869   51228 fix.go:190] guest clock delta is within tolerance: 86.004727ms
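
Editor's note: the fix.go lines above compare the guest clock against the host and accept the 86ms skew. A minimal sketch of that tolerance check follows; the 2s threshold here is an assumption for illustration, not minikube's actual constant.

	// Minimal sketch of the "guest clock delta is within tolerance" check.
	package main
	
	import (
		"fmt"
		"time"
	)
	
	// clockWithinTolerance reports whether |guest - host| <= tolerance.
	func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}
	
	func main() {
		host := time.Now()
		guest := host.Add(86 * time.Millisecond)                      // delta from the log
		fmt.Println(clockWithinTolerance(guest, host, 2*time.Second)) // true
	}
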
	I1108 00:13:42.285877   51228 start.go:83] releasing machines lock for "default-k8s-diff-port-039263", held for 22.820045752s
	I1108 00:13:42.285913   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:42.286161   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetIP
	I1108 00:13:42.288711   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.289095   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:42.289133   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.289241   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:42.289864   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:42.290109   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:42.290209   51228 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 00:13:42.290261   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:42.290323   51228 ssh_runner.go:195] Run: cat /version.json
	I1108 00:13:42.290345   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:42.293063   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.293219   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.293451   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:42.293483   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.293570   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:42.293599   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.293721   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:42.293878   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:42.293887   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:42.294075   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:42.294085   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:42.294234   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:42.294280   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:13:42.294336   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:13:42.386493   51228 ssh_runner.go:195] Run: systemctl --version
	I1108 00:13:42.411009   51228 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 00:13:42.558200   51228 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 00:13:42.566040   51228 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 00:13:42.566116   51228 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:13:42.584775   51228 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 00:13:42.584800   51228 start.go:472] detecting cgroup driver to use...
	I1108 00:13:42.584872   51228 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 00:13:42.598720   51228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 00:13:42.612836   51228 docker.go:203] disabling cri-docker service (if available) ...
	I1108 00:13:42.612927   51228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 00:13:42.627474   51228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 00:13:42.641670   51228 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 00:13:42.753616   51228 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 00:13:42.888608   51228 docker.go:219] disabling docker service ...
	I1108 00:13:42.888680   51228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 00:13:42.903298   51228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 00:13:42.920184   51228 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 00:13:43.054621   51228 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 00:13:43.181836   51228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 00:13:43.198481   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 00:13:43.219759   51228 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1108 00:13:43.219827   51228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:43.231137   51228 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 00:13:43.231221   51228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:43.242206   51228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:43.253506   51228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:43.264311   51228 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 00:13:43.276451   51228 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 00:13:43.288448   51228 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1108 00:13:43.288522   51228 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1108 00:13:43.305986   51228 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
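
Editor's note: the sequence above probes a netfilter sysctl, tolerates its absence, loads br_netfilter, and enables IPv4 forwarding. The sketch below reproduces that probe-then-fallback pattern by shelling out to the same commands; error handling is simplified and this is not minikube's crio.go.

	// Hedged sketch: verify netfilter via sysctl, fall back to modprobe,
	// then enable ip_forward, mirroring the commands in the log above.
	package main
	
	import (
		"log"
		"os/exec"
	)
	
	func ensureNetfilter() {
		if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			log.Printf("couldn't verify netfilter (might be okay): %v", err)
			if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
				log.Fatalf("modprobe br_netfilter: %v", err)
			}
		}
		if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
			log.Fatalf("enable ip_forward: %v", err)
		}
	}
	
	func main() { ensureNetfilter() }
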
	I1108 00:13:43.318366   51228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 00:13:43.479739   51228 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 00:13:43.705223   51228 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 00:13:43.705302   51228 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 00:13:43.711842   51228 start.go:540] Will wait 60s for crictl version
	I1108 00:13:43.711915   51228 ssh_runner.go:195] Run: which crictl
	I1108 00:13:43.717688   51228 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 00:13:43.762492   51228 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1108 00:13:43.762651   51228 ssh_runner.go:195] Run: crio --version
	I1108 00:13:43.814548   51228 ssh_runner.go:195] Run: crio --version
	I1108 00:13:43.870144   51228 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1108 00:13:39.990811   50505 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:13:40.020162   50505 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1108 00:13:40.064758   50505 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:13:40.081652   50505 system_pods.go:59] 8 kube-system pods found
	I1108 00:13:40.081705   50505 system_pods.go:61] "coredns-5dd5756b68-lhnz5" [936252ee-4f00-49e2-96e4-7c4f4a4ca378] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 00:13:40.081725   50505 system_pods.go:61] "etcd-no-preload-320390" [95e08672-dc80-4aa6-bd4a-e5f77bfc4b51] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 00:13:40.081738   50505 system_pods.go:61] "kube-apiserver-no-preload-320390" [3261561e-b7d5-4302-8e0b-301d00407e8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 00:13:40.081748   50505 system_pods.go:61] "kube-controller-manager-no-preload-320390" [b87602fd-b248-4529-9116-1851a4284bbf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 00:13:40.081763   50505 system_pods.go:61] "kube-proxy-c4mbm" [33806b69-57c0-4807-849b-b6a4f8a5db12] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 00:13:40.081777   50505 system_pods.go:61] "kube-scheduler-no-preload-320390" [4f7b4160-b99e-4f76-9b12-c5b1849c91b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 00:13:40.081791   50505 system_pods.go:61] "metrics-server-57f55c9bc5-th89c" [06aea7c0-065b-44a4-8d53-432f5722e937] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:13:40.081810   50505 system_pods.go:61] "storage-provisioner" [c7b0810b-1ba7-4d56-ad97-3f04d771960d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 00:13:40.081823   50505 system_pods.go:74] duration metric: took 17.024016ms to wait for pod list to return data ...
	I1108 00:13:40.081836   50505 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:13:40.093789   50505 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:13:40.093827   50505 node_conditions.go:123] node cpu capacity is 2
	I1108 00:13:40.093841   50505 node_conditions.go:105] duration metric: took 11.998569ms to run NodePressure ...
	I1108 00:13:40.093863   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:40.340962   50505 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1108 00:13:40.346004   50505 kubeadm.go:787] kubelet initialised
	I1108 00:13:40.346032   50505 kubeadm.go:788] duration metric: took 5.042344ms waiting for restarted kubelet to initialise ...
	I1108 00:13:40.346044   50505 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:13:40.355648   50505 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-lhnz5" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:42.377985   50505 pod_ready.go:102] pod "coredns-5dd5756b68-lhnz5" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:42.313355   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Start
	I1108 00:13:42.313526   50022 main.go:141] libmachine: (old-k8s-version-590541) Ensuring networks are active...
	I1108 00:13:42.314176   50022 main.go:141] libmachine: (old-k8s-version-590541) Ensuring network default is active
	I1108 00:13:42.314638   50022 main.go:141] libmachine: (old-k8s-version-590541) Ensuring network mk-old-k8s-version-590541 is active
	I1108 00:13:42.315060   50022 main.go:141] libmachine: (old-k8s-version-590541) Getting domain xml...
	I1108 00:13:42.315833   50022 main.go:141] libmachine: (old-k8s-version-590541) Creating domain...
	I1108 00:13:43.739499   50022 main.go:141] libmachine: (old-k8s-version-590541) Waiting to get IP...
	I1108 00:13:43.740647   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:43.741195   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:43.741259   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:43.741155   51822 retry.go:31] will retry after 195.621332ms: waiting for machine to come up
	I1108 00:13:43.938557   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:43.939127   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:43.939268   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:43.939200   51822 retry.go:31] will retry after 278.651736ms: waiting for machine to come up
	I1108 00:13:44.219831   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:44.220473   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:44.220500   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:44.220418   51822 retry.go:31] will retry after 384.748872ms: waiting for machine to come up
	I1108 00:13:44.607110   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:44.607665   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:44.607696   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:44.607591   51822 retry.go:31] will retry after 401.60668ms: waiting for machine to come up
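
Editor's note: the retry.go lines above wait for the old-k8s-version domain to obtain a DHCP lease, sleeping a jittered, roughly growing delay between attempts (195ms, 278ms, 384ms, ...). A minimal sketch of that pattern follows; it is not minikube's retry.go, and the attempt count and base delay are assumptions.

	// Hedged sketch of a jittered, doubling backoff like the
	// "will retry after ..." lines above.
	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// retry calls fn until it succeeds or attempts run out, sleeping a
	// randomized, roughly doubling delay between tries.
	func retry(attempts int, base time.Duration, fn func() error) error {
		delay := base
		for i := 0; i < attempts; i++ {
			if err := fn(); err == nil {
				return nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
			time.Sleep(jittered)
			delay *= 2
		}
		return errors.New("machine never came up")
	}
	
	func main() {
		i := 0
		_ = retry(10, 200*time.Millisecond, func() error {
			i++
			if i < 4 {
				return errors.New("no IP yet") // succeeds on the 4th try
			}
			return nil
		})
	}
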
	I1108 00:13:43.871596   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetIP
	I1108 00:13:43.874814   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:43.875307   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:43.875357   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:43.875575   51228 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1108 00:13:43.880324   51228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:13:43.895271   51228 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1108 00:13:43.895331   51228 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:13:43.943120   51228 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1108 00:13:43.943238   51228 ssh_runner.go:195] Run: which lz4
	I1108 00:13:43.947723   51228 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1108 00:13:43.952328   51228 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1108 00:13:43.952365   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1108 00:13:45.857547   51228 crio.go:444] Took 1.909852 seconds to copy over tarball
	I1108 00:13:45.857623   51228 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
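
Editor's note: the preload restore above copies the lz4-compressed image tarball to the guest and unpacks it with tar's -I filter. The sketch below shows the extraction step via os/exec; the scp leg is omitted, the paths are taken from the log, and running it requires lz4 to be installed.

	// Sketch of the preload extraction step: tar -I lz4 -C /var -xf ...
	package main
	
	import (
		"log"
		"os/exec"
		"time"
	)
	
	func main() {
		start := time.Now()
		cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("extract preload: %v: %s", err, out)
		}
		log.Printf("Took %.6f seconds to extract the tarball", time.Since(start).Seconds())
	}
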
	I1108 00:13:45.314087   50613 api_server.go:279] https://192.168.39.159:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:13:45.314125   50613 api_server.go:103] status: https://192.168.39.159:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:13:45.314144   50613 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1108 00:13:45.333352   50613 api_server.go:279] https://192.168.39.159:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:13:45.333384   50613 api_server.go:103] status: https://192.168.39.159:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:13:45.833959   50613 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1108 00:13:45.852530   50613 api_server.go:279] https://192.168.39.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:13:45.852613   50613 api_server.go:103] status: https://192.168.39.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:13:46.333996   50613 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1108 00:13:46.346680   50613 api_server.go:279] https://192.168.39.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:13:46.346714   50613 api_server.go:103] status: https://192.168.39.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:13:46.833955   50613 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1108 00:13:46.841287   50613 api_server.go:279] https://192.168.39.159:8443/healthz returned 200:
	ok
	I1108 00:13:46.853271   50613 api_server.go:141] control plane version: v1.28.3
	I1108 00:13:46.853299   50613 api_server.go:131] duration metric: took 6.372641273s to wait for apiserver health ...
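
Editor's note: the api_server.go exchange above polls the apiserver's /healthz, riding out the 403 (anonymous forbidden) and 500 (bootstrap hooks pending) responses until it returns 200 "ok". A minimal sketch of such a loop follows; InsecureSkipVerify stands in for minikube's CA handling, and the 500ms cadence is inferred from the log timestamps.

	// Hedged sketch: poll /healthz until the apiserver answers 200.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reports "ok"
				}
				fmt.Printf("status: %s returned error %d, retrying\n", url, resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s never became healthy within %v", url, timeout)
	}
	
	func main() {
		_ = waitForHealthz("https://192.168.39.159:8443/healthz", 4*time.Minute)
	}
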
	I1108 00:13:46.853310   50613 cni.go:84] Creating CNI manager for ""
	I1108 00:13:46.853318   50613 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:13:46.855336   50613 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:13:46.856955   50613 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:13:46.892049   50613 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1108 00:13:46.933039   50613 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:13:44.399678   50505 pod_ready.go:102] pod "coredns-5dd5756b68-lhnz5" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:45.879110   50505 pod_ready.go:92] pod "coredns-5dd5756b68-lhnz5" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:45.879142   50505 pod_ready.go:81] duration metric: took 5.523463579s waiting for pod "coredns-5dd5756b68-lhnz5" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:45.879154   50505 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:45.885356   50505 pod_ready.go:92] pod "etcd-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:45.885377   50505 pod_ready.go:81] duration metric: took 6.21581ms waiting for pod "etcd-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:45.885385   50505 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:47.914308   50505 pod_ready.go:102] pod "kube-apiserver-no-preload-320390" in "kube-system" namespace has status "Ready":"False"
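
Editor's note: the pod_ready.go lines above wait for each system-critical pod to report the Ready condition. The sketch below does the equivalent with client-go rather than minikube's helpers; the polling interval, kubeconfig path, and wait.PollImmediate usage are assumptions for illustration.

	// Hedged client-go sketch: wait for a kube-system pod's Ready condition.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // not found yet; keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					fmt.Printf("pod %q has status \"Ready\":%q\n", name, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		_ = waitPodReady(cs, "kube-system", "etcd-no-preload-320390", 4*time.Minute)
	}
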
	I1108 00:13:45.011074   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:45.011525   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:45.011560   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:45.011500   51822 retry.go:31] will retry after 708.154492ms: waiting for machine to come up
	I1108 00:13:45.720911   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:45.721383   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:45.721418   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:45.721294   51822 retry.go:31] will retry after 746.365542ms: waiting for machine to come up
	I1108 00:13:46.469031   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:46.469615   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:46.469641   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:46.469556   51822 retry.go:31] will retry after 924.305758ms: waiting for machine to come up
	I1108 00:13:47.395756   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:47.396297   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:47.396323   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:47.396241   51822 retry.go:31] will retry after 1.343866256s: waiting for machine to come up
	I1108 00:13:48.741427   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:48.741851   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:48.741883   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:48.741816   51822 retry.go:31] will retry after 1.388849147s: waiting for machine to come up
	I1108 00:13:49.625178   51228 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.76753046s)
	I1108 00:13:49.625229   51228 crio.go:451] Took 3.767633 seconds to extract the tarball
	I1108 00:13:49.625242   51228 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1108 00:13:49.670263   51228 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:13:49.727650   51228 crio.go:496] all images are preloaded for cri-o runtime.
	I1108 00:13:49.727677   51228 cache_images.go:84] Images are preloaded, skipping loading
	I1108 00:13:49.727747   51228 ssh_runner.go:195] Run: crio config
	I1108 00:13:49.811565   51228 cni.go:84] Creating CNI manager for ""
	I1108 00:13:49.811592   51228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:13:49.811615   51228 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1108 00:13:49.811639   51228 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.116 APIServerPort:8444 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-039263 NodeName:default-k8s-diff-port-039263 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 00:13:49.811812   51228 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.116
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-039263"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
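
Editor's note: the kubeadm config above is rendered from the kubeadm.go options struct logged earlier. A minimal text/template sketch of that rendering, covering only the InitConfiguration stanza, follows; the struct field names are illustrative, not minikube's.

	// Hedged sketch: render an InitConfiguration like the one above from a
	// small options struct via text/template.
	package main
	
	import (
		"os"
		"text/template"
	)
	
	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.AdvertiseAddress}}
	  taints: []
	`
	
	func main() {
		tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
		_ = tmpl.Execute(os.Stdout, struct {
			AdvertiseAddress string
			APIServerPort    int
			CRISocket        string
			NodeName         string
		}{"192.168.72.116", 8444, "unix:///var/run/crio/crio.sock", "default-k8s-diff-port-039263"})
	}
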
	
	I1108 00:13:49.811906   51228 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-039263 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-039263 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1108 00:13:49.811984   51228 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1108 00:13:49.822961   51228 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 00:13:49.823027   51228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 00:13:49.832632   51228 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1108 00:13:49.850812   51228 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 00:13:49.869345   51228 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1108 00:13:49.887645   51228 ssh_runner.go:195] Run: grep 192.168.72.116	control-plane.minikube.internal$ /etc/hosts
	I1108 00:13:49.892538   51228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:13:49.907166   51228 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263 for IP: 192.168.72.116
	I1108 00:13:49.907205   51228 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:13:49.907374   51228 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1108 00:13:49.907425   51228 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1108 00:13:49.907523   51228 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/client.key
	I1108 00:13:49.907601   51228 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/apiserver.key.b2cbdf93
	I1108 00:13:49.907658   51228 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/proxy-client.key
	I1108 00:13:49.907807   51228 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem (1338 bytes)
	W1108 00:13:49.907851   51228 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848_empty.pem, impossibly tiny 0 bytes
	I1108 00:13:49.907872   51228 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 00:13:49.907915   51228 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1108 00:13:49.907951   51228 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1108 00:13:49.907988   51228 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1108 00:13:49.908046   51228 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:13:49.908955   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1108 00:13:49.938941   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 00:13:49.964654   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 00:13:49.991354   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 00:13:50.018895   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 00:13:50.048330   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 00:13:50.076095   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 00:13:50.103752   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 00:13:50.130140   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem --> /usr/share/ca-certificates/16848.pem (1338 bytes)
	I1108 00:13:50.156862   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /usr/share/ca-certificates/168482.pem (1708 bytes)
	I1108 00:13:50.181994   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 00:13:50.208069   51228 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 00:13:50.226069   51228 ssh_runner.go:195] Run: openssl version
	I1108 00:13:50.232941   51228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168482.pem && ln -fs /usr/share/ca-certificates/168482.pem /etc/ssl/certs/168482.pem"
	I1108 00:13:50.246981   51228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168482.pem
	I1108 00:13:50.252981   51228 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1108 00:13:50.253059   51228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168482.pem
	I1108 00:13:50.260626   51228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168482.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 00:13:50.274135   51228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 00:13:50.285611   51228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:50.290761   51228 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:50.290837   51228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:50.297508   51228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 00:13:50.308772   51228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16848.pem && ln -fs /usr/share/ca-certificates/16848.pem /etc/ssl/certs/16848.pem"
	I1108 00:13:50.320122   51228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16848.pem
	I1108 00:13:50.326021   51228 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1108 00:13:50.326083   51228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16848.pem
	I1108 00:13:50.332534   51228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16848.pem /etc/ssl/certs/51391683.0"
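
The `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's hashed CA directory layout: each certificate in /etc/ssl/certs is reachable through a symlink named after its 8-hex-digit subject hash (e.g. b5213941.0 for minikubeCA.pem). A sketch of the same step, assuming the openssl binary is on PATH:

```go
// Sketch: create the "<subject-hash>.0" symlink for a CA certificate,
// mirroring the openssl/ln commands in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
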
	I1108 00:13:50.344381   51228 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1108 00:13:50.350040   51228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 00:13:50.356282   51228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 00:13:50.362850   51228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 00:13:50.378237   51228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 00:13:50.385607   51228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 00:13:50.392272   51228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
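
`openssl x509 -checkend 86400` exits non-zero when the certificate expires within the next 86400 seconds (24 hours), which is how these lines decide whether the control-plane certs need regeneration. A pure-Go equivalent of that check:

```go
// Sketch: report whether a PEM certificate expires within the next 24h,
// matching the semantics of `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```
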
	I1108 00:13:50.399220   51228 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-039263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-039263 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.116 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:13:50.399304   51228 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 00:13:50.399358   51228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:13:50.449693   51228 cri.go:89] found id: ""
	I1108 00:13:50.449770   51228 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 00:13:50.460225   51228 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1108 00:13:50.460256   51228 kubeadm.go:636] restartCluster start
	I1108 00:13:50.460313   51228 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 00:13:50.469777   51228 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:50.470973   51228 kubeconfig.go:92] found "default-k8s-diff-port-039263" server: "https://192.168.72.116:8444"
	I1108 00:13:50.473778   51228 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 00:13:50.482964   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:50.483022   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:50.495100   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:50.495123   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:50.495186   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:50.508735   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:46.949012   50613 system_pods.go:59] 9 kube-system pods found
	I1108 00:13:46.950252   50613 system_pods.go:61] "coredns-5dd5756b68-7djdr" [a1459bf3-703b-418a-bc22-c98e285c6e31] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 00:13:46.950302   50613 system_pods.go:61] "coredns-5dd5756b68-8qjbd" [fa7b05fd-725b-4c9c-815e-360f2bef8ee6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 00:13:46.950336   50613 system_pods.go:61] "etcd-embed-certs-253253" [2631ed7d-3af4-4848-bbb8-c77038f8a1f4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 00:13:46.950369   50613 system_pods.go:61] "kube-apiserver-embed-certs-253253" [80b3e8da-6474-4fd8-bb86-0d9cc70086ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 00:13:46.950391   50613 system_pods.go:61] "kube-controller-manager-embed-certs-253253" [ee19def3-043a-4832-8153-52aaf8b4748a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 00:13:46.950407   50613 system_pods.go:61] "kube-proxy-rsgkf" [509d66e3-b034-4dcd-a16e-b2f93b9efa6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 00:13:46.950482   50613 system_pods.go:61] "kube-scheduler-embed-certs-253253" [ef7bb9c3-98c8-45d8-8f54-852fb639b408] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 00:13:46.950497   50613 system_pods.go:61] "metrics-server-57f55c9bc5-s7ldx" [61cd423c-edbd-4d0c-87e8-1ac8e52c70e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:13:46.950507   50613 system_pods.go:61] "storage-provisioner" [d6157b7c-6b52-4ca8-a935-d68a0291305f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 00:13:46.950519   50613 system_pods.go:74] duration metric: took 17.457991ms to wait for pod list to return data ...
	I1108 00:13:46.950532   50613 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:13:46.956062   50613 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:13:46.956142   50613 node_conditions.go:123] node cpu capacity is 2
	I1108 00:13:46.956165   50613 node_conditions.go:105] duration metric: took 5.622732ms to run NodePressure ...
	I1108 00:13:46.956193   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:47.272695   50613 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1108 00:13:47.280001   50613 kubeadm.go:787] kubelet initialised
	I1108 00:13:47.280031   50613 kubeadm.go:788] duration metric: took 7.30064ms waiting for restarted kubelet to initialise ...
	I1108 00:13:47.280041   50613 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:13:47.290043   50613 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-7djdr" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:50.378703   50613 pod_ready.go:102] pod "coredns-5dd5756b68-7djdr" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:50.370740   50505 pod_ready.go:102] pod "kube-apiserver-no-preload-320390" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:51.912802   50505 pod_ready.go:92] pod "kube-apiserver-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:51.912845   50505 pod_ready.go:81] duration metric: took 6.027451924s waiting for pod "kube-apiserver-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.912861   50505 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.920043   50505 pod_ready.go:92] pod "kube-controller-manager-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:51.920073   50505 pod_ready.go:81] duration metric: took 7.195906ms waiting for pod "kube-controller-manager-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.920085   50505 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c4mbm" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.927863   50505 pod_ready.go:92] pod "kube-proxy-c4mbm" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:51.927887   50505 pod_ready.go:81] duration metric: took 7.793258ms waiting for pod "kube-proxy-c4mbm" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.927900   50505 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.934444   50505 pod_ready.go:92] pod "kube-scheduler-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:51.934470   50505 pod_ready.go:81] duration metric: took 6.560509ms waiting for pod "kube-scheduler-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.934481   50505 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace to be "Ready" ...
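
The pod_ready.go lines above all follow one pattern: poll the pod's Ready condition at a fixed interval under an overall 4m0s budget, logging the duration on success. A sketch of that shape; checkPodReady is a hypothetical stand-in for the real Kubernetes API call:

```go
// Sketch: poll a readiness condition until it holds or the context's
// deadline expires, the shape of minikube's pod_ready wait loop.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

func waitFor(ctx context.Context, interval time.Duration, cond func() (bool, error)) error {
	tick := time.NewTicker(interval)
	defer tick.Stop()
	for {
		ok, err := cond()
		if err != nil || ok {
			return err
		}
		select {
		case <-ctx.Done():
			return errors.New("timed out waiting for condition")
		case <-tick.C:
		}
	}
}

// checkPodReady is hypothetical; the real code asks the API server whether
// the pod's Ready condition is True.
func checkPodReady(namespace, name string) (bool, error) { return false, nil }

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	start := time.Now()
	err := waitFor(ctx, 500*time.Millisecond, func() (bool, error) {
		return checkPodReady("kube-system", "metrics-server-57f55c9bc5-th89c")
	})
	fmt.Printf("waited %s, err=%v\n", time.Since(start), err)
}
```
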
	I1108 00:13:50.131947   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:50.132491   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:50.132526   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:50.132397   51822 retry.go:31] will retry after 1.410573405s: waiting for machine to come up
	I1108 00:13:51.544674   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:51.545073   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:51.545099   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:51.545025   51822 retry.go:31] will retry after 1.773802671s: waiting for machine to come up
	I1108 00:13:53.320381   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:53.320863   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:53.320893   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:53.320805   51822 retry.go:31] will retry after 3.166868207s: waiting for machine to come up
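
The retry.go lines above show the delays growing (1.41s, 1.77s, 3.16s, ...) while the VM boots and acquires a DHCP lease. A sketch of that backoff-with-jitter pattern; lookupIP is a hypothetical stand-in for libmachine's lease lookup:

```go
// Sketch: retry an operation with exponential, jittered delays, like the
// "will retry after ..." lines in the log.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		d := base<<uint(i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

// lookupIP is hypothetical; the real code parses the host's DHCP leases
// for the machine's MAC address.
func lookupIP() error { return errors.New("unable to find current IP address") }

func main() {
	fmt.Println("final:", retry(5, time.Second, lookupIP))
}
```
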
	I1108 00:13:51.009734   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:51.009825   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:51.026052   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:51.509697   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:51.509786   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:51.527840   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:52.009557   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:52.009656   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:52.025049   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:52.509606   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:52.509707   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:52.526174   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:53.008803   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:53.008954   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:53.022472   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:53.508900   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:53.509005   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:53.525225   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:54.009884   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:54.009974   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:54.022171   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:54.509280   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:54.509376   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:54.522041   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:55.009670   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:55.009752   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:55.023035   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:55.509640   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:55.509717   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:55.526730   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:52.836317   50613 pod_ready.go:102] pod "coredns-5dd5756b68-7djdr" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:53.332094   50613 pod_ready.go:92] pod "coredns-5dd5756b68-7djdr" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:53.332121   50613 pod_ready.go:81] duration metric: took 6.042047013s waiting for pod "coredns-5dd5756b68-7djdr" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:53.332133   50613 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-8qjbd" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:53.337858   50613 pod_ready.go:92] pod "coredns-5dd5756b68-8qjbd" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:53.337882   50613 pod_ready.go:81] duration metric: took 5.740229ms waiting for pod "coredns-5dd5756b68-8qjbd" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:53.337894   50613 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:55.356131   50613 pod_ready.go:102] pod "etcd-embed-certs-253253" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:54.323357   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:56.328874   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:58.820773   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:56.490058   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:56.490553   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:56.490590   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:56.490511   51822 retry.go:31] will retry after 3.18441493s: waiting for machine to come up
	I1108 00:13:56.009549   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:56.009646   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:56.024559   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:56.508912   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:56.509015   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:56.521861   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:57.009408   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:57.009479   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:57.022156   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:57.509466   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:57.509554   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:57.522766   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:58.008909   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:58.009026   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:58.021521   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:58.509050   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:58.509134   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:58.521387   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:59.008889   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:59.008975   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:59.021781   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:59.509489   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:59.509575   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:59.521581   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:00.009117   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:14:00.009196   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:00.022210   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:00.483934   51228 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
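
The loop above shells out to pgrep roughly every half second; once no apiserver PID has appeared before the surrounding deadline, restartCluster gives up and concludes the cluster needs reconfiguring. A sketch of that wait, run locally here rather than over SSH as minikube does, assuming the same ~500ms cadence:

```go
// Sketch: poll for a kube-apiserver PID until the context deadline
// expires, matching the "Checking apiserver status ..." loop above.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerPID(ctx context.Context) error {
	for {
		out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			return nil // pgrep exits 0 and prints a PID once the process is up
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver error: %w", ctx.Err())
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	fmt.Println(waitForAPIServerPID(ctx))
}
```
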
	I1108 00:14:00.483990   51228 kubeadm.go:1128] stopping kube-system containers ...
	I1108 00:14:00.484004   51228 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1108 00:14:00.484066   51228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:14:00.528120   51228 cri.go:89] found id: ""
	I1108 00:14:00.528178   51228 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1108 00:14:00.544876   51228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:14:00.553827   51228 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:14:00.553883   51228 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:14:00.562695   51228 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1108 00:14:00.562721   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:00.676044   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:57.856242   50613 pod_ready.go:102] pod "etcd-embed-certs-253253" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:58.855444   50613 pod_ready.go:92] pod "etcd-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:58.855471   50613 pod_ready.go:81] duration metric: took 5.517568786s waiting for pod "etcd-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.855479   50613 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.860431   50613 pod_ready.go:92] pod "kube-apiserver-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:58.860453   50613 pod_ready.go:81] duration metric: took 4.966273ms waiting for pod "kube-apiserver-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.860464   50613 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.865854   50613 pod_ready.go:92] pod "kube-controller-manager-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:58.865874   50613 pod_ready.go:81] duration metric: took 5.40177ms waiting for pod "kube-controller-manager-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.865914   50613 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rsgkf" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.870805   50613 pod_ready.go:92] pod "kube-proxy-rsgkf" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:58.870826   50613 pod_ready.go:81] duration metric: took 4.898411ms waiting for pod "kube-proxy-rsgkf" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.870835   50613 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.958009   50613 pod_ready.go:92] pod "kube-scheduler-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:58.958034   50613 pod_ready.go:81] duration metric: took 87.190501ms waiting for pod "kube-scheduler-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.958052   50613 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:01.265674   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:00.823696   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:03.322129   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:59.678086   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:59.678579   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:59.678598   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:59.678528   51822 retry.go:31] will retry after 4.30352873s: waiting for machine to come up
	I1108 00:14:03.983994   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:03.984437   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has current primary IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:03.984474   50022 main.go:141] libmachine: (old-k8s-version-590541) Found IP for machine: 192.168.50.49
	I1108 00:14:03.984489   50022 main.go:141] libmachine: (old-k8s-version-590541) Reserving static IP address...
	I1108 00:14:03.984947   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "old-k8s-version-590541", mac: "52:54:00:3c:aa:82", ip: "192.168.50.49"} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:03.984981   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | skip adding static IP to network mk-old-k8s-version-590541 - found existing host DHCP lease matching {name: "old-k8s-version-590541", mac: "52:54:00:3c:aa:82", ip: "192.168.50.49"}
	I1108 00:14:03.985000   50022 main.go:141] libmachine: (old-k8s-version-590541) Reserved static IP address: 192.168.50.49
	I1108 00:14:03.985020   50022 main.go:141] libmachine: (old-k8s-version-590541) Waiting for SSH to be available...
	I1108 00:14:03.985034   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Getting to WaitForSSH function...
	I1108 00:14:03.987671   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:03.988083   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:03.988116   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:03.988388   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Using SSH client type: external
	I1108 00:14:03.988424   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa (-rw-------)
	I1108 00:14:03.988461   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.49 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1108 00:14:03.988481   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | About to run SSH command:
	I1108 00:14:03.988496   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | exit 0
	I1108 00:14:04.080867   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | SSH cmd err, output: <nil>: 
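
WaitForSSH above repeatedly runs `exit 0` through an external ssh client with host-key checking disabled until the guest answers; an empty "SSH cmd err, output" means sshd is finally reachable. A sketch of that wait, with the key path taken from the log:

```go
// Sketch: poll `ssh ... exit 0` until the guest accepts the connection,
// like libmachine's WaitForSSH.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForSSH(user, ip, key string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", key, user+"@"+ip, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil // the guest ran `exit 0`, so sshd is up
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s not available after %s", ip, timeout)
}

func main() {
	key := "/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa"
	fmt.Println(waitForSSH("docker", "192.168.50.49", key, time.Minute))
}
```
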
	I1108 00:14:04.081275   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetConfigRaw
	I1108 00:14:04.081955   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetIP
	I1108 00:14:04.085061   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.085512   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.085554   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.085942   50022 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/config.json ...
	I1108 00:14:04.086165   50022 machine.go:88] provisioning docker machine ...
	I1108 00:14:04.086188   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:14:04.086417   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetMachineName
	I1108 00:14:04.086612   50022 buildroot.go:166] provisioning hostname "old-k8s-version-590541"
	I1108 00:14:04.086634   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetMachineName
	I1108 00:14:04.086822   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:04.089431   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.089808   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.089838   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.090007   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:04.090201   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.090362   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.090535   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:04.090686   50022 main.go:141] libmachine: Using SSH client type: native
	I1108 00:14:04.090991   50022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1108 00:14:04.091002   50022 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-590541 && echo "old-k8s-version-590541" | sudo tee /etc/hostname
	I1108 00:14:04.228526   50022 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-590541
	
	I1108 00:14:04.228561   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:04.232020   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.232390   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.232454   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.232743   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:04.232930   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.233109   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.233264   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:04.233430   50022 main.go:141] libmachine: Using SSH client type: native
	I1108 00:14:04.233786   50022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1108 00:14:04.233812   50022 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-590541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-590541/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-590541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 00:14:04.370396   50022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 00:14:04.370424   50022 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1108 00:14:04.370469   50022 buildroot.go:174] setting up certificates
	I1108 00:14:04.370487   50022 provision.go:83] configureAuth start
	I1108 00:14:04.370505   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetMachineName
	I1108 00:14:04.370779   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetIP
	I1108 00:14:04.373683   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.374081   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.374111   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.374240   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:04.377048   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.377441   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.377469   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.377596   50022 provision.go:138] copyHostCerts
	I1108 00:14:04.377658   50022 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1108 00:14:04.377678   50022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1108 00:14:04.377748   50022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1108 00:14:04.377855   50022 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1108 00:14:04.377867   50022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1108 00:14:04.377893   50022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1108 00:14:04.377965   50022 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1108 00:14:04.377979   50022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1108 00:14:04.378005   50022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1108 00:14:04.378064   50022 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-590541 san=[192.168.50.49 192.168.50.49 localhost 127.0.0.1 minikube old-k8s-version-590541]
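
The "generating server cert" line mints a machine server certificate signed by the shared CA, with the SAN list shown (the machine IP twice, localhost, 127.0.0.1, minikube, and the machine name). A self-contained sketch of minting such a certificate; unlike the real code it generates a throwaway CA instead of loading ca.pem/ca-key.pem from disk, and error handling is elided for brevity:

```go
// Sketch: create a CA-signed server certificate with DNS and IP SANs,
// the shape of libmachine's generate-server-cert step.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikube's ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-590541"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-590541"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.50.49"), net.ParseIP("127.0.0.1")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```
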
	I1108 00:14:04.534682   50022 provision.go:172] copyRemoteCerts
	I1108 00:14:04.534750   50022 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 00:14:04.534778   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:04.538002   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.538379   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.538408   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.538639   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:04.538789   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.538975   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:04.539146   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:14:04.632308   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 00:14:01.961492   51228 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.285410864s)
	I1108 00:14:01.961529   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:02.165604   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:02.235655   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
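
Rather than a full `kubeadm init`, the restart path replays the individual phases shown above (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same config file, with PATH extended so the pinned v1.28.3 binaries are found. A simplified sketch of driving those phases as subprocesses, local where minikube's ssh_runner is remote:

```go
// Sketch: run the kubeadm init phases from the log as subprocesses.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func runPhase(args ...string) error {
	full := append([]string{"env", "PATH=/var/lib/minikube/binaries/v1.28.3:" + os.Getenv("PATH"),
		"kubeadm", "init", "phase"}, args...)
	full = append(full, "--config", "/var/tmp/minikube/kubeadm.yaml")
	cmd := exec.Command("sudo", full...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		fmt.Println("==> kubeadm init phase", p)
		if err := runPhase(p...); err != nil {
			fmt.Fprintln(os.Stderr, "phase failed:", err)
			os.Exit(1)
		}
	}
}
```
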
	I1108 00:14:02.352126   51228 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:14:02.352212   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:02.370538   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:02.884696   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:03.384139   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:03.884529   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:04.384134   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:04.884877   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:04.913244   51228 api_server.go:72] duration metric: took 2.56112461s to wait for apiserver process to appear ...
	I1108 00:14:04.913273   51228 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:14:04.913295   51228 api_server.go:253] Checking apiserver healthz at https://192.168.72.116:8444/healthz ...
	I1108 00:14:04.657542   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1108 00:14:04.682815   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 00:14:04.709405   50022 provision.go:86] duration metric: configureAuth took 338.902281ms
	I1108 00:14:04.709439   50022 buildroot.go:189] setting minikube options for container-runtime
	I1108 00:14:04.709651   50022 config.go:182] Loaded profile config "old-k8s-version-590541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1108 00:14:04.709741   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:04.713141   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.713520   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.713561   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.713718   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:04.713923   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.714108   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.714259   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:04.714497   50022 main.go:141] libmachine: Using SSH client type: native
	I1108 00:14:04.714885   50022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1108 00:14:04.714905   50022 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 00:14:05.055346   50022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 00:14:05.055427   50022 machine.go:91] provisioned docker machine in 969.247821ms
	I1108 00:14:05.055446   50022 start.go:300] post-start starting for "old-k8s-version-590541" (driver="kvm2")
	I1108 00:14:05.055459   50022 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 00:14:05.055493   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:14:05.055841   50022 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 00:14:05.055895   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:05.058959   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.059423   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:05.059457   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.059601   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:05.059775   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:05.059895   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:05.060042   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:14:05.151543   50022 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 00:14:05.155876   50022 info.go:137] Remote host: Buildroot 2021.02.12
	I1108 00:14:05.155902   50022 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1108 00:14:05.155969   50022 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1108 00:14:05.156056   50022 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1108 00:14:05.156229   50022 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 00:14:05.165742   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:14:05.190622   50022 start.go:303] post-start completed in 135.159333ms
	I1108 00:14:05.190648   50022 fix.go:56] fixHost completed within 22.904612851s
	I1108 00:14:05.190673   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:05.193716   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.194165   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:05.194195   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.194480   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:05.194725   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:05.194929   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:05.195106   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:05.195260   50022 main.go:141] libmachine: Using SSH client type: native
	I1108 00:14:05.195755   50022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1108 00:14:05.195778   50022 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1108 00:14:05.326443   50022 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699402445.269657345
	
	I1108 00:14:05.326467   50022 fix.go:206] guest clock: 1699402445.269657345
	I1108 00:14:05.326476   50022 fix.go:219] Guest: 2023-11-08 00:14:05.269657345 +0000 UTC Remote: 2023-11-08 00:14:05.190652611 +0000 UTC m=+370.589908297 (delta=79.004734ms)
	I1108 00:14:05.326524   50022 fix.go:190] guest clock delta is within tolerance: 79.004734ms
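The fix.go lines above compare the guest clock (read via `date +%s.%N`) against the host clock and accept a small skew. A sketch of that delta check (the one-second tolerance here is an assumption for illustration):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		guestOut := "1699402445.269657345" // sample output of `date +%s.%N` on the guest
		parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			panic(err)
		}
		nsec, err := strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			panic(err)
		}
		guest := time.Unix(sec, nsec)
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta: %v, within tolerance: %v\n", delta, delta < time.Second)
	}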
	I1108 00:14:05.326531   50022 start.go:83] releasing machines lock for "old-k8s-version-590541", held for 23.040527062s
	I1108 00:14:05.326558   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:14:05.326845   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetIP
	I1108 00:14:05.329775   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.330225   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:05.330254   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.330447   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:14:05.331102   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:14:05.331338   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:14:05.331424   50022 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 00:14:05.331467   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:05.331584   50022 ssh_runner.go:195] Run: cat /version.json
	I1108 00:14:05.331610   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:05.334586   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.334817   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.335125   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:05.335182   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.335225   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:05.335307   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:05.335339   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.335418   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:05.335536   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:05.335603   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:05.335774   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:05.335783   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:14:05.335906   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:05.336063   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:14:05.423679   50022 ssh_runner.go:195] Run: systemctl --version
	I1108 00:14:05.446956   50022 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 00:14:05.598713   50022 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 00:14:05.605558   50022 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 00:14:05.605641   50022 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:14:05.620183   50022 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
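The find/mv step above sidelines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so they cannot conflict with the CNI minikube installs later. A local-filesystem Go analogue of that rename pass (paths and suffix come from the log; the real command runs remotely over SSH with sudo):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, _ := filepath.Glob(pattern)
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // already sidelined
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					fmt.Println("skip:", err)
					continue
				}
				fmt.Println("disabled:", m)
			}
		}
	}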
	I1108 00:14:05.620211   50022 start.go:472] detecting cgroup driver to use...
	I1108 00:14:05.620277   50022 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 00:14:05.635981   50022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 00:14:05.649637   50022 docker.go:203] disabling cri-docker service (if available) ...
	I1108 00:14:05.649699   50022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 00:14:05.664232   50022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 00:14:05.678205   50022 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 00:14:05.791991   50022 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 00:14:05.925002   50022 docker.go:219] disabling docker service ...
	I1108 00:14:05.925135   50022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 00:14:05.939853   50022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 00:14:05.955518   50022 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 00:14:06.074872   50022 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 00:14:06.189371   50022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 00:14:06.202247   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 00:14:06.219012   50022 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1108 00:14:06.219082   50022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:14:06.229837   50022 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 00:14:06.229911   50022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:14:06.239769   50022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:14:06.248633   50022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
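The sed edits above pin the pause image and cgroup manager in /etc/crio/crio.conf.d/02-crio.conf. A sketch of the same line-oriented rewrite in Go (illustrative only; minikube itself shells out to sed as logged):

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.1"`))
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(conf, out, 0o644); err != nil {
			panic(err)
		}
	}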
	I1108 00:14:06.257717   50022 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 00:14:06.268893   50022 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 00:14:06.277427   50022 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1108 00:14:06.277495   50022 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1108 00:14:06.290771   50022 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
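The netfilter handling above is a probe-and-fallback: if the bridge-nf-call-iptables sysctl is missing (status 255 above), load br_netfilter, then make sure IPv4 forwarding is on. A sketch of that sequence with os/exec, mirroring the logged commands:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			fmt.Println("sysctl probe failed, loading br_netfilter:", err)
			if err := exec.Command("sudo", "modprobe", "br_netfilter"); err != nil && err.Run() != nil {
				panic(err)
			}
		}
		// Needs root; the log does this via `sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`.
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
			panic(err)
		}
	}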
	I1108 00:14:06.299918   50022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 00:14:06.421038   50022 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 00:14:06.587544   50022 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 00:14:06.587624   50022 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 00:14:06.592726   50022 start.go:540] Will wait 60s for crictl version
	I1108 00:14:06.592781   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:06.596695   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 00:14:06.637642   50022 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1108 00:14:06.637733   50022 ssh_runner.go:195] Run: crio --version
	I1108 00:14:06.690026   50022 ssh_runner.go:195] Run: crio --version
	I1108 00:14:06.740455   50022 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1108 00:14:03.266720   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:05.764837   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:05.322160   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:07.329491   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:06.741799   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetIP
	I1108 00:14:06.744301   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:06.744599   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:06.744630   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:06.744861   50022 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1108 00:14:06.749385   50022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
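The one-liner above refreshes the host.minikube.internal mapping atomically: filter out any stale line, append the new entry, write a temp file, and copy it over /etc/hosts. A local Go analogue of that rewrite (illustrative; needs root, like the sudo cp in the logged command):

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		kept := lines[:0]
		for _, line := range lines {
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, "192.168.50.1\thost.minikube.internal")
		tmp, err := os.CreateTemp("/etc", "hosts.*")
		if err != nil {
			panic(err)
		}
		defer os.Remove(tmp.Name())
		if _, err := tmp.WriteString(strings.Join(kept, "\n") + "\n"); err != nil {
			panic(err)
		}
		tmp.Close()
		// Rename within the same filesystem is atomic, mirroring the cp-over pattern.
		if err := os.Rename(tmp.Name(), "/etc/hosts"); err != nil {
			panic(err)
		}
	}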
	I1108 00:14:06.762645   50022 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1108 00:14:06.762732   50022 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:14:06.804386   50022 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1108 00:14:06.804458   50022 ssh_runner.go:195] Run: which lz4
	I1108 00:14:06.808948   50022 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1108 00:14:06.813319   50022 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1108 00:14:06.813355   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1108 00:14:08.476578   50022 crio.go:444] Took 1.667668 seconds to copy over tarball
	I1108 00:14:08.476646   50022 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1108 00:14:09.078810   51228 api_server.go:279] https://192.168.72.116:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:14:09.078843   51228 api_server.go:103] status: https://192.168.72.116:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:14:09.078859   51228 api_server.go:253] Checking apiserver healthz at https://192.168.72.116:8444/healthz ...
	I1108 00:14:09.140049   51228 api_server.go:279] https://192.168.72.116:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:14:09.140083   51228 api_server.go:103] status: https://192.168.72.116:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:14:09.641000   51228 api_server.go:253] Checking apiserver healthz at https://192.168.72.116:8444/healthz ...
	I1108 00:14:09.647216   51228 api_server.go:279] https://192.168.72.116:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:14:09.647247   51228 api_server.go:103] status: https://192.168.72.116:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:14:10.140446   51228 api_server.go:253] Checking apiserver healthz at https://192.168.72.116:8444/healthz ...
	I1108 00:14:10.148995   51228 api_server.go:279] https://192.168.72.116:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:14:10.149028   51228 api_server.go:103] status: https://192.168.72.116:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:14:10.640719   51228 api_server.go:253] Checking apiserver healthz at https://192.168.72.116:8444/healthz ...
	I1108 00:14:10.649076   51228 api_server.go:279] https://192.168.72.116:8444/healthz returned 200:
	ok
	I1108 00:14:10.660508   51228 api_server.go:141] control plane version: v1.28.3
	I1108 00:14:10.660545   51228 api_server.go:131] duration metric: took 5.747263547s to wait for apiserver health ...
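The healthz transcript above is a polling loop: anonymous requests get 403 until RBAC bootstrap admits them, then 500 while post-start hooks finish, then 200 "ok". A sketch of such a loop (interval and deadline are illustrative, not minikube's exact values):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// Anonymous probe with no client cert, so skip verification,
				// as an unauthenticated healthz check effectively does.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(1 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.72.116:8444/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz: %s\n", body)
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		panic("apiserver never became healthy")
	}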
	I1108 00:14:10.660556   51228 cni.go:84] Creating CNI manager for ""
	I1108 00:14:10.660566   51228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:14:10.662644   51228 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:14:10.664069   51228 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:14:10.682131   51228 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
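The 457-byte file written above is minikube's bridge CNI conflist. A sketch of the general shape of such a config, built with encoding/json (field values here are common bridge-plugin defaults chosen for illustration, not necessarily minikube's exact template):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		conflist := map[string]any{
			"cniVersion": "0.3.1",
			"name":       "bridge",
			"plugins": []map[string]any{{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // matches the pod CIDR used elsewhere in this run
				},
			}},
		}
		out, _ := json.MarshalIndent(conflist, "", "  ")
		fmt.Println(string(out))
	}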
	I1108 00:14:10.709582   51228 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:14:10.725779   51228 system_pods.go:59] 8 kube-system pods found
	I1108 00:14:10.725840   51228 system_pods.go:61] "coredns-5dd5756b68-rz9t4" [d7b24f41-ed9e-4b07-991b-8587f49d7902] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 00:14:10.725854   51228 system_pods.go:61] "etcd-default-k8s-diff-port-039263" [f58b5fbb-a565-4d47-8b3d-ea62169dc0fc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 00:14:10.725868   51228 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-039263" [d0c3391c-679f-49ad-a6ff-ef62d74a62ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 00:14:10.725882   51228 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-039263" [33f54c9b-cc67-4662-8db9-c735fde4d9a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 00:14:10.725903   51228 system_pods.go:61] "kube-proxy-z7b8g" [079a28b1-dbad-4e62-a9ea-b667206433cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 00:14:10.725914   51228 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-039263" [629f940b-6d2a-4c3c-8a11-2805dc2c04d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 00:14:10.725927   51228 system_pods.go:61] "metrics-server-57f55c9bc5-nlhpn" [f5d69cb1-4266-45fc-9bab-57053f915aa0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:14:10.725941   51228 system_pods.go:61] "storage-provisioner" [fb6541da-3ed3-4abb-b534-643bb5faf7d3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 00:14:10.725953   51228 system_pods.go:74] duration metric: took 16.346941ms to wait for pod list to return data ...
	I1108 00:14:10.725965   51228 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:14:10.730466   51228 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:14:10.730555   51228 node_conditions.go:123] node cpu capacity is 2
	I1108 00:14:10.730574   51228 node_conditions.go:105] duration metric: took 4.602969ms to run NodePressure ...
	I1108 00:14:10.730595   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:07.772448   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:10.267241   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:09.824633   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:11.829090   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:14.015104   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:11.781938   50022 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.305246635s)
	I1108 00:14:11.781979   50022 crio.go:451] Took 3.305377 seconds to extract the tarball
	I1108 00:14:11.781999   50022 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1108 00:14:11.837911   50022 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:14:11.907599   50022 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1108 00:14:11.907634   50022 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1108 00:14:11.907702   50022 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:14:11.907965   50022 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1108 00:14:11.907983   50022 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1108 00:14:11.908131   50022 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1108 00:14:11.907966   50022 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1108 00:14:11.908257   50022 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1108 00:14:11.908131   50022 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1108 00:14:11.908365   50022 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1108 00:14:11.909163   50022 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1108 00:14:11.909239   50022 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1108 00:14:11.909251   50022 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1108 00:14:11.909332   50022 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1108 00:14:11.909171   50022 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1108 00:14:11.909397   50022 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1108 00:14:11.909435   50022 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:14:11.909625   50022 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1108 00:14:12.040043   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1108 00:14:12.042004   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1108 00:14:12.047478   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1108 00:14:12.051016   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1108 00:14:12.095045   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1108 00:14:12.126645   50022 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1108 00:14:12.126718   50022 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1108 00:14:12.126788   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.133035   50022 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1108 00:14:12.133078   50022 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1108 00:14:12.133120   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.164621   50022 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1108 00:14:12.164686   50022 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1108 00:14:12.164754   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.182223   50022 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1108 00:14:12.182267   50022 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1108 00:14:12.182318   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.201151   50022 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1108 00:14:12.201196   50022 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1108 00:14:12.201244   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.201255   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1108 00:14:12.201306   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1108 00:14:12.201305   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1108 00:14:12.201341   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1108 00:14:12.203375   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1108 00:14:12.208529   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1108 00:14:12.341873   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1108 00:14:12.341901   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1108 00:14:12.341954   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1108 00:14:12.341960   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1108 00:14:12.356561   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1108 00:14:12.356663   50022 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1108 00:14:12.361927   50022 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1108 00:14:12.361962   50022 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1108 00:14:12.362023   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.382770   50022 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1108 00:14:12.382819   50022 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1108 00:14:12.382864   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.406169   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1108 00:14:12.406213   50022 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1108 00:14:12.406228   50022 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1108 00:14:12.406273   50022 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1108 00:14:12.406313   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1108 00:14:12.406274   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1108 00:14:12.863910   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:14:14.488498   50022 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0: (2.082152502s)
	I1108 00:14:14.488536   50022 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.082234083s)
	I1108 00:14:14.488548   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1108 00:14:14.488571   50022 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1108 00:14:14.488623   50022 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0: (2.082249259s)
	I1108 00:14:14.488666   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1108 00:14:14.488711   50022 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.624766966s)
	I1108 00:14:14.488762   50022 cache_images.go:92] LoadImages completed in 2.581114029s
	W1108 00:14:14.488842   50022 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	I1108 00:14:14.488915   50022 ssh_runner.go:195] Run: crio config
	I1108 00:14:14.557127   50022 cni.go:84] Creating CNI manager for ""
	I1108 00:14:14.557155   50022 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:14:14.557176   50022 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1108 00:14:14.557204   50022 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.49 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-590541 NodeName:old-k8s-version-590541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.49"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.49 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1108 00:14:14.557391   50022 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.49
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-590541"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.49
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.49"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-590541
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.49:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 00:14:14.557508   50022 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-590541 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.49
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-590541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1108 00:14:14.557579   50022 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1108 00:14:14.568423   50022 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 00:14:14.568501   50022 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 00:14:14.578581   50022 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1108 00:14:14.596389   50022 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 00:14:14.613956   50022 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1108 00:14:14.631988   50022 ssh_runner.go:195] Run: grep 192.168.50.49	control-plane.minikube.internal$ /etc/hosts
	I1108 00:14:14.636236   50022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.49	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:14:14.648849   50022 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541 for IP: 192.168.50.49
	I1108 00:14:14.648888   50022 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:14:14.649071   50022 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1108 00:14:14.649126   50022 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1108 00:14:14.649231   50022 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/client.key
	I1108 00:14:14.649312   50022 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/apiserver.key.5b7c76e3
	I1108 00:14:14.649375   50022 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/proxy-client.key
	I1108 00:14:14.649542   50022 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem (1338 bytes)
	W1108 00:14:14.649587   50022 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848_empty.pem, impossibly tiny 0 bytes
	I1108 00:14:14.649597   50022 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 00:14:14.649636   50022 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1108 00:14:14.649677   50022 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1108 00:14:14.649714   50022 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1108 00:14:14.649771   50022 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:14:11.058474   51228 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1108 00:14:11.064805   51228 kubeadm.go:787] kubelet initialised
	I1108 00:14:11.064852   51228 kubeadm.go:788] duration metric: took 6.346592ms waiting for restarted kubelet to initialise ...
	I1108 00:14:11.064863   51228 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:14:11.073499   51228 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-rz9t4" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:11.089759   51228 pod_ready.go:97] node "default-k8s-diff-port-039263" hosting pod "coredns-5dd5756b68-rz9t4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.089791   51228 pod_ready.go:81] duration metric: took 16.257238ms waiting for pod "coredns-5dd5756b68-rz9t4" in "kube-system" namespace to be "Ready" ...
	E1108 00:14:11.089803   51228 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-039263" hosting pod "coredns-5dd5756b68-rz9t4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.089811   51228 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:11.100580   51228 pod_ready.go:97] node "default-k8s-diff-port-039263" hosting pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.100605   51228 pod_ready.go:81] duration metric: took 10.783802ms waiting for pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	E1108 00:14:11.100615   51228 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-039263" hosting pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.100621   51228 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:11.113797   51228 pod_ready.go:97] node "default-k8s-diff-port-039263" hosting pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.113826   51228 pod_ready.go:81] duration metric: took 13.195367ms waiting for pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	E1108 00:14:11.113838   51228 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-039263" hosting pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.113847   51228 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:11.124704   51228 pod_ready.go:97] node "default-k8s-diff-port-039263" hosting pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.124736   51228 pod_ready.go:81] duration metric: took 10.87946ms waiting for pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	E1108 00:14:11.124750   51228 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-039263" hosting pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.124760   51228 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-z7b8g" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:11.915650   51228 pod_ready.go:92] pod "kube-proxy-z7b8g" in "kube-system" namespace has status "Ready":"True"
	I1108 00:14:11.915674   51228 pod_ready.go:81] duration metric: took 790.904941ms waiting for pod "kube-proxy-z7b8g" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:11.915686   51228 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:14.011244   51228 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"False"
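The pod_ready.go lines above poll each system pod for the Ready condition, skipping pods whose node is itself not Ready. A client-go sketch of one such wait (pod name, namespace, and kubeconfig path are placeholders):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-z7b8g", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		panic("pod never became Ready")
	}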
	I1108 00:14:12.537889   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:14.767882   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:16.322840   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:18.323955   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:14.650662   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1108 00:14:14.682536   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1108 00:14:14.708618   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 00:14:14.737947   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 00:14:14.768365   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 00:14:14.795469   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 00:14:14.824086   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 00:14:14.851375   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 00:14:14.878638   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 00:14:14.906647   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem --> /usr/share/ca-certificates/16848.pem (1338 bytes)
	I1108 00:14:14.933316   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /usr/share/ca-certificates/168482.pem (1708 bytes)
	I1108 00:14:14.961937   50022 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 00:14:14.980167   50022 ssh_runner.go:195] Run: openssl version
	I1108 00:14:14.986053   50022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16848.pem && ln -fs /usr/share/ca-certificates/16848.pem /etc/ssl/certs/16848.pem"
	I1108 00:14:14.996201   50022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16848.pem
	I1108 00:14:15.001410   50022 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1108 00:14:15.001490   50022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16848.pem
	I1108 00:14:15.008681   50022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16848.pem /etc/ssl/certs/51391683.0"
	I1108 00:14:15.022034   50022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168482.pem && ln -fs /usr/share/ca-certificates/168482.pem /etc/ssl/certs/168482.pem"
	I1108 00:14:15.031992   50022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168482.pem
	I1108 00:14:15.037854   50022 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1108 00:14:15.037910   50022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168482.pem
	I1108 00:14:15.045107   50022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168482.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 00:14:15.057464   50022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 00:14:15.070137   50022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:14:15.075848   50022 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:14:15.075917   50022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:14:15.083414   50022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 00:14:15.094499   50022 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1108 00:14:15.099437   50022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 00:14:15.105940   50022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 00:14:15.112527   50022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 00:14:15.118429   50022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 00:14:15.124769   50022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 00:14:15.130975   50022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
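The six `openssl x509 ... -checkend 86400` runs above are minikube verifying that each existing control-plane certificate stays valid for at least another 86400 seconds (24 hours) before reusing it on restart. A minimal Go sketch of the same check using only the standard library; the helper name is illustrative, not minikube's internal API:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // certValidFor reports whether the PEM certificate at path remains
    // valid for at least d more time -- the same question that
    // `openssl x509 -checkend <seconds>` answers against NotAfter.
    func certValidFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := certValidFor("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("valid for 24h:", ok)
    }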
	I1108 00:14:15.136772   50022 kubeadm.go:404] StartCluster: {Name:old-k8s-version-590541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-590541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.49 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:14:15.136903   50022 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 00:14:15.136952   50022 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:14:15.184018   50022 cri.go:89] found id: ""
	I1108 00:14:15.184095   50022 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 00:14:15.196900   50022 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1108 00:14:15.196924   50022 kubeadm.go:636] restartCluster start
	I1108 00:14:15.196994   50022 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 00:14:15.208810   50022 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:15.210399   50022 kubeconfig.go:92] found "old-k8s-version-590541" server: "https://192.168.50.49:8443"
	I1108 00:14:15.214114   50022 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 00:14:15.223586   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:15.223644   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:15.234506   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:15.234525   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:15.234565   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:15.244971   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:15.745626   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:15.745698   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:15.757830   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:16.246012   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:16.246090   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:16.258583   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:16.745965   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:16.746045   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:16.758317   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:17.245985   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:17.246087   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:17.257615   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:17.745646   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:17.745715   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:17.757591   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:18.245666   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:18.245773   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:18.258225   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:18.745765   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:18.745842   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:18.756699   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:19.245946   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:19.246016   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:19.258255   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
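The burst of "Checking apiserver status" entries above is a poll loop: `sudo pgrep -xnf kube-apiserver.*minikube.*` is re-run roughly every 500 ms until a PID appears or the surrounding context times out (the "needs reconfigure: apiserver error: context deadline exceeded" decision further down). A minimal sketch of that pattern with the Go standard library; the 500 ms interval mirrors the log timestamps, while the function name and 10 s deadline are assumptions:

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServerPID polls pgrep until kube-apiserver shows up
    // or ctx expires.
    func waitForAPIServerPID(ctx context.Context, interval time.Duration) (string, error) {
    	ticker := time.NewTicker(interval)
    	defer ticker.Stop()
    	for {
    		// pgrep exits non-zero when no process matches, so an error
    		// here just means "not running yet".
    		out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf",
    			"kube-apiserver.*minikube.*").Output()
    		if err == nil {
    			return string(out), nil
    		}
    		select {
    		case <-ctx.Done():
    			return "", fmt.Errorf("apiserver never appeared: %w", ctx.Err())
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()
    	pid, err := waitForAPIServerPID(ctx, 500*time.Millisecond)
    	fmt.Println(pid, err)
    }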
	I1108 00:14:16.222461   51228 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:18.722269   51228 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"True"
	I1108 00:14:18.722291   51228 pod_ready.go:81] duration metric: took 6.806598217s waiting for pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:18.722300   51228 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:20.739081   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:17.264976   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:19.265242   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:21.265825   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:20.822592   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:23.321115   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:19.745997   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:19.746135   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:19.757885   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:20.245884   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:20.245988   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:20.258408   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:20.745963   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:20.746035   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:20.757892   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:21.246052   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:21.246133   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:21.258401   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:21.745947   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:21.746040   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:21.759160   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:22.246004   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:22.246075   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:22.258859   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:22.745787   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:22.745889   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:22.758099   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:23.245961   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:23.246068   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:23.258810   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:23.745167   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:23.745248   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:23.757093   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:24.245690   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:24.245751   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:24.258264   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:22.739380   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:24.739502   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:23.766235   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:26.264779   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:25.322215   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:27.322896   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:24.745944   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:24.746024   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:24.759229   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:25.224130   50022 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1108 00:14:25.224188   50022 kubeadm.go:1128] stopping kube-system containers ...
	I1108 00:14:25.224207   50022 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1108 00:14:25.224267   50022 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:14:25.271348   50022 cri.go:89] found id: ""
	I1108 00:14:25.271418   50022 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1108 00:14:25.287540   50022 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:14:25.296398   50022 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:14:25.296452   50022 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:14:25.305111   50022 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1108 00:14:25.305137   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:25.434385   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:26.361847   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:26.561621   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:26.667973   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:26.798155   50022 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:14:26.798240   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:26.822210   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:27.335493   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:27.836175   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:28.336398   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:28.836400   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:28.862790   50022 api_server.go:72] duration metric: took 2.064638513s to wait for apiserver process to appear ...
	I1108 00:14:28.862814   50022 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:14:28.862827   50022 api_server.go:253] Checking apiserver healthz at https://192.168.50.49:8443/healthz ...
	I1108 00:14:26.740013   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:28.740958   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:28.266931   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:30.765036   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:29.827237   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:32.323375   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:33.863452   50022 api_server.go:269] stopped: https://192.168.50.49:8443/healthz: Get "https://192.168.50.49:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1108 00:14:33.863495   50022 api_server.go:253] Checking apiserver healthz at https://192.168.50.49:8443/healthz ...
	I1108 00:14:34.513495   50022 api_server.go:279] https://192.168.50.49:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:14:34.513530   50022 api_server.go:103] status: https://192.168.50.49:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:14:31.240440   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:33.739764   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:35.014492   50022 api_server.go:253] Checking apiserver healthz at https://192.168.50.49:8443/healthz ...
	I1108 00:14:35.020991   50022 api_server.go:279] https://192.168.50.49:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1108 00:14:35.021019   50022 api_server.go:103] status: https://192.168.50.49:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1108 00:14:35.514559   50022 api_server.go:253] Checking apiserver healthz at https://192.168.50.49:8443/healthz ...
	I1108 00:14:35.521451   50022 api_server.go:279] https://192.168.50.49:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1108 00:14:35.521475   50022 api_server.go:103] status: https://192.168.50.49:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1108 00:14:36.014620   50022 api_server.go:253] Checking apiserver healthz at https://192.168.50.49:8443/healthz ...
	I1108 00:14:36.021243   50022 api_server.go:279] https://192.168.50.49:8443/healthz returned 200:
	ok
	I1108 00:14:36.029191   50022 api_server.go:141] control plane version: v1.16.0
	I1108 00:14:36.029214   50022 api_server.go:131] duration metric: took 7.166394703s to wait for apiserver health ...
	I1108 00:14:36.029225   50022 cni.go:84] Creating CNI manager for ""
	I1108 00:14:36.029232   50022 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:14:36.030800   50022 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:14:32.765436   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:35.264657   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:34.825199   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:37.322438   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:36.032078   50022 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:14:36.042827   50022 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
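The 457-byte file scp'd to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced just above. The log does not show its contents; a representative bridge conflist (values illustrative, not minikube's verbatim file) looks like:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }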
	I1108 00:14:36.062239   50022 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:14:36.070373   50022 system_pods.go:59] 7 kube-system pods found
	I1108 00:14:36.070404   50022 system_pods.go:61] "coredns-5644d7b6d9-cmx8s" [510a3ae2-abff-40f9-8605-7fd6cc5316de] Running
	I1108 00:14:36.070414   50022 system_pods.go:61] "etcd-old-k8s-version-590541" [4597d43f-d424-4591-8a5c-6e4a7d60bb2b] Running
	I1108 00:14:36.070420   50022 system_pods.go:61] "kube-apiserver-old-k8s-version-590541" [353c1157-7cac-4809-91ea-30745ecbc10c] Running
	I1108 00:14:36.070427   50022 system_pods.go:61] "kube-controller-manager-old-k8s-version-590541" [30679f8f-aa28-4349-ada1-97af45c0c065] Running
	I1108 00:14:36.070432   50022 system_pods.go:61] "kube-proxy-r8p96" [21ac95e4-595f-4520-8174-ef5e1334c1be] Running
	I1108 00:14:36.070437   50022 system_pods.go:61] "kube-scheduler-old-k8s-version-590541" [f406d277-d786-417a-9428-8433143db81c] Running
	I1108 00:14:36.070443   50022 system_pods.go:61] "storage-provisioner" [26f85033-bd24-4332-ba8d-1aed49559417] Running
	I1108 00:14:36.070452   50022 system_pods.go:74] duration metric: took 8.188793ms to wait for pod list to return data ...
	I1108 00:14:36.070461   50022 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:14:36.075209   50022 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:14:36.075242   50022 node_conditions.go:123] node cpu capacity is 2
	I1108 00:14:36.075259   50022 node_conditions.go:105] duration metric: took 4.788324ms to run NodePressure ...
	I1108 00:14:36.075286   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:36.310748   50022 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1108 00:14:36.319886   50022 retry.go:31] will retry after 259.644928ms: kubelet not initialised
	I1108 00:14:36.584728   50022 retry.go:31] will retry after 259.541836ms: kubelet not initialised
	I1108 00:14:36.851013   50022 retry.go:31] will retry after 319.229418ms: kubelet not initialised
	I1108 00:14:37.192544   50022 retry.go:31] will retry after 949.166954ms: kubelet not initialised
	I1108 00:14:38.149087   50022 retry.go:31] will retry after 1.159461481s: kubelet not initialised
	I1108 00:14:39.313777   50022 retry.go:31] will retry after 1.441288405s: kubelet not initialised
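The retry intervals above (~260 ms, ~260 ms, ~320 ms, ~950 ms, 1.2 s, 1.4 s, and later several seconds) grow roughly geometrically with jitter, the usual exponential-backoff shape. A minimal stdlib sketch of the pattern; the base delay, factor, and cap are assumptions rather than minikube's exact retry.go tuning:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff calls fn until it succeeds, sleeping with jittered
    // exponential backoff between attempts, up to maxAttempts.
    func retryWithBackoff(fn func() error, maxAttempts int) error {
    	delay := 250 * time.Millisecond
    	for attempt := 1; ; attempt++ {
    		err := fn()
    		if err == nil {
    			return nil
    		}
    		if attempt == maxAttempts {
    			return fmt.Errorf("giving up after %d attempts: %w", attempt, err)
    		}
    		// Jitter: sleep somewhere in [delay, 2*delay).
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v\n", sleep)
    		time.Sleep(sleep)
    		if delay *= 2; delay > 20*time.Second {
    			delay = 20 * time.Second // cap the growth
    		}
    	}
    }

    func main() {
    	i := 0
    	err := retryWithBackoff(func() error {
    		if i++; i < 4 {
    			return errors.New("kubelet not initialised")
    		}
    		return nil
    	}, 10)
    	fmt.Println(err)
    }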
	I1108 00:14:36.240206   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:38.240974   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:40.739451   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:37.266643   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:39.267727   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:41.765636   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:39.323180   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:41.323278   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:43.821724   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:40.762380   50022 retry.go:31] will retry after 2.811416386s: kubelet not initialised
	I1108 00:14:43.579217   50022 retry.go:31] will retry after 4.427599597s: kubelet not initialised
	I1108 00:14:42.739823   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:45.238841   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:44.266015   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:46.766564   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:45.822389   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:47.822637   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:48.011401   50022 retry.go:31] will retry after 9.583320686s: kubelet not initialised
	I1108 00:14:47.239708   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:49.739520   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:49.264876   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:51.265467   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:50.321858   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:52.823189   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:51.740005   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:54.239137   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:53.267904   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:55.767709   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:55.321381   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:57.821679   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:57.600096   50022 retry.go:31] will retry after 8.628668417s: kubelet not initialised
	I1108 00:14:56.242527   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:58.740775   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:00.742908   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:58.263898   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:00.264487   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:59.822276   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:02.322959   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:02.744271   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:05.239364   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:02.764787   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:04.767529   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:04.821706   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:06.822611   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:08.822950   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:06.235557   50022 retry.go:31] will retry after 18.967803661s: kubelet not initialised
	I1108 00:15:07.239957   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:09.243640   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:07.268913   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:09.765546   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:10.823397   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:13.320774   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:11.741381   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:14.239143   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:12.265009   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:14.265329   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:16.265470   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:15.322148   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:17.821371   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:16.740364   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:18.742058   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:18.267349   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:20.763380   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:19.821495   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:21.822583   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:21.239196   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:23.239716   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:25.740472   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:22.764934   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:25.264695   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:24.322074   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:26.324255   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:28.823261   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:25.208456   50022 kubeadm.go:787] kubelet initialised
	I1108 00:15:25.208482   50022 kubeadm.go:788] duration metric: took 48.897709945s waiting for restarted kubelet to initialise ...
	I1108 00:15:25.208492   50022 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:15:25.213730   50022 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-cmx8s" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.220419   50022 pod_ready.go:92] pod "coredns-5644d7b6d9-cmx8s" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:25.220444   50022 pod_ready.go:81] duration metric: took 6.688227ms waiting for pod "coredns-5644d7b6d9-cmx8s" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.220455   50022 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-n42t2" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.225713   50022 pod_ready.go:92] pod "coredns-5644d7b6d9-n42t2" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:25.225734   50022 pod_ready.go:81] duration metric: took 5.271879ms waiting for pod "coredns-5644d7b6d9-n42t2" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.225742   50022 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.231081   50022 pod_ready.go:92] pod "etcd-old-k8s-version-590541" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:25.231102   50022 pod_ready.go:81] duration metric: took 5.353373ms waiting for pod "etcd-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.231113   50022 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.235653   50022 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-590541" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:25.235676   50022 pod_ready.go:81] duration metric: took 4.554135ms waiting for pod "kube-apiserver-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.235687   50022 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.607677   50022 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-590541" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:25.607702   50022 pod_ready.go:81] duration metric: took 372.006515ms waiting for pod "kube-controller-manager-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.607715   50022 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r8p96" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:26.007866   50022 pod_ready.go:92] pod "kube-proxy-r8p96" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:26.007901   50022 pod_ready.go:81] duration metric: took 400.175462ms waiting for pod "kube-proxy-r8p96" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:26.007915   50022 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:26.408998   50022 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-590541" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:26.409023   50022 pod_ready.go:81] duration metric: took 401.100386ms waiting for pod "kube-scheduler-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
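Every pod_ready wait in this log reduces to reading the pod's Ready condition from its status: ConditionTrue ends the wait, anything else prints "Ready":"False" and polls again, which is why the metrics-server entries repeat throughout this run. A minimal sketch of that condition check, assuming the k8s.io/api dependency (the function name is illustrative):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // isPodReady mirrors the check behind pod_ready.go: a pod counts as
    // "Ready" only when its PodReady condition is ConditionTrue.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	pod := &corev1.Pod{
    		Status: corev1.PodStatus{
    			Conditions: []corev1.PodCondition{
    				{Type: corev1.PodReady, Status: corev1.ConditionFalse},
    			},
    		},
    	}
    	// Prints false, as for the metrics-server pods polled above.
    	fmt.Println(isPodReady(pod))
    }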
	I1108 00:15:26.409037   50022 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:28.714602   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:27.743907   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:30.242025   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:27.764799   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:29.765943   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:31.322316   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:33.821723   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:30.715349   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:33.213961   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:32.739648   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:35.238544   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:32.270073   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:34.764272   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:36.768065   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:36.322383   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:38.821688   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:35.215842   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:37.714618   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:37.239003   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:39.239229   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:39.266142   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:41.765225   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:40.822847   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:42.823419   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:40.214573   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:42.214623   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:41.239832   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:43.740100   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:43.765773   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:45.767613   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:45.323162   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:47.323716   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:44.714312   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:46.714541   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:49.214939   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:46.238097   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:48.240079   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:50.740404   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:48.264657   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:50.266155   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:49.821171   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:51.821247   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:53.821754   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:51.715388   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:54.214072   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:53.239902   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:55.240606   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:52.764709   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:54.765802   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:55.821843   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:57.822037   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:56.214628   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:58.215873   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:57.739805   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:59.742442   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:57.264640   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:59.265598   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:01.269674   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:59.823743   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:02.321221   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:00.716761   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:02.717300   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:02.240157   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:04.740325   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:03.765956   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:06.266810   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:04.322200   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:06.325043   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:08.822004   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:05.214678   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:07.214757   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:06.741067   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:09.238455   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:08.764592   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:10.764740   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:11.321882   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:13.323997   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:09.715347   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:12.215814   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:11.238960   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:13.239188   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:15.239933   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:13.268590   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:15.767860   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:15.822286   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:18.323447   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:14.715001   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:17.214864   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:19.220945   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:17.743653   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:20.239877   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:18.267403   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:20.765825   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:20.828982   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:23.322508   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:21.715604   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:24.215532   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:22.240232   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:24.240410   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:22.767921   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:25.266374   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:25.821672   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:28.323033   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:26.715605   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:29.215673   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:26.240493   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:28.739795   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:30.739838   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:27.268851   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:29.765296   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:30.822234   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:32.822653   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:31.714216   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:33.714677   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:33.238984   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:35.239828   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:32.264549   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:34.765297   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:34.823243   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:37.321349   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:35.715073   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:37.715879   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:37.240347   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:39.739526   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:37.265284   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:39.764898   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:39.322588   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:41.822017   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:40.214804   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:42.714783   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:42.238649   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:44.238830   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:42.265404   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:44.266352   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:46.763687   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:44.321389   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:46.322294   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:48.822670   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:45.215415   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:47.715215   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:46.239884   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:48.740698   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:50.740725   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:48.765820   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:51.265744   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:51.321664   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:53.321945   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:49.715720   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:52.215540   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:53.239897   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:55.241013   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:53.764035   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:55.767704   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:55.324156   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:57.821380   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:54.716014   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:57.213472   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:59.216084   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:57.740250   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:59.740808   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:58.264915   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:00.764064   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:59.823358   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:01.824897   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:03.827668   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:01.714273   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:03.714538   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:02.238718   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:04.239300   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:02.766695   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:05.268491   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:06.321926   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:08.822906   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:06.215268   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:08.215344   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:06.740893   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:09.240404   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:07.764370   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:09.764952   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:11.765807   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:10.823030   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:13.320640   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:10.715494   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:13.214139   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:11.741308   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:13.741849   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:14.265117   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:16.265550   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:15.322703   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:17.822360   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:15.214808   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:17.214944   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:19.215663   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:16.239627   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:18.241991   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:20.742074   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:18.764043   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:20.764244   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:20.322245   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:22.821679   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:21.715000   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:23.715813   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:23.240800   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:25.741203   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:23.264974   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:25.267122   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:24.823144   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:27.322674   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:26.215099   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:28.215710   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:28.242151   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:30.741098   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:27.765060   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:30.266360   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:29.821467   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:31.822093   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:30.714747   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:32.716931   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:33.241199   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:35.744300   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:32.765221   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:34.766163   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:34.320569   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:36.321680   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:38.321803   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:35.215458   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:37.715660   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:38.241103   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:40.241689   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:37.264893   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:39.264980   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:41.764589   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:40.323069   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:42.822323   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:40.214357   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:42.215838   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:42.738943   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:44.738995   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:44.265516   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:46.764435   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:44.827347   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:47.321911   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:44.715762   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:46.716679   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:49.214899   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:46.739838   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:48.740204   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:48.766668   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:51.266657   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:49.822604   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:51.823333   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:51.935354   50505 pod_ready.go:81] duration metric: took 4m0.000854035s waiting for pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace to be "Ready" ...
	E1108 00:17:51.935397   50505 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1108 00:17:51.935438   50505 pod_ready.go:38] duration metric: took 4m11.589382956s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:17:51.935470   50505 kubeadm.go:640] restartCluster took 4m31.32204509s
	W1108 00:17:51.935533   50505 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1108 00:17:51.935560   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
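The interleaved wait loops above come from four clusters being exercised in parallel (process IDs 50022, 50505, 50613 and 51228), each polling its metrics-server pod's Ready condition every few seconds until pod_ready.go's 4m0s deadline expires; here process 50505 hits the deadline and falls back to a kubeadm reset. The same Ready check can be reproduced by hand against the affected profile (context and pod names taken from this log; the jsonpath query is a generic sketch, not minikube's own code):

    # Print the pod's Ready condition ("True"/"False"), the value pod_ready.go:102 logs
    kubectl --context no-preload-320390 -n kube-system \
      get pod metrics-server-57f55c9bc5-th89c \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

    # Or block until it turns Ready, with the same 4m budget the test allows
    kubectl --context no-preload-320390 -n kube-system \
      wait --for=condition=Ready pod/metrics-server-57f55c9bc5-th89c --timeout=4m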
	I1108 00:17:51.715171   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:53.716530   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:51.244682   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:53.741272   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:55.743900   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:53.765757   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:55.766672   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:56.218347   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:58.715621   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:58.246553   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:00.740366   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:58.265496   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:58.958296   50613 pod_ready.go:81] duration metric: took 4m0.000224971s waiting for pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace to be "Ready" ...
	E1108 00:17:58.958324   50613 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1108 00:17:58.958349   50613 pod_ready.go:38] duration metric: took 4m11.678298333s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:17:58.958373   50613 kubeadm.go:640] restartCluster took 4m32.361691152s
	W1108 00:17:58.958429   50613 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1108 00:17:58.958455   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1108 00:18:01.214685   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:03.216848   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:03.239882   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:05.739403   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:06.321352   50505 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.385768547s)
	I1108 00:18:06.321435   50505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:06.335385   50505 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:18:06.345310   50505 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:18:06.355261   50505 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
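Exit status 2 here is expected rather than a failure: the kubeadm reset a few lines earlier removed every kubeconfig under /etc/kubernetes, so the stale-config check finds nothing to clean up and minikube proceeds straight to a fresh kubeadm init. The branch condition is just the ls exit code (paths exactly as in the log):

    # ls exits 2 when none of the kubeconfig files survived the reset,
    # which makes minikube skip stale-config cleanup and re-init from scratch
    sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
        /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
    echo "exit=$?"   # 0 = configs present (cleanup path), 2 = missing (fresh init)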
	I1108 00:18:06.355301   50505 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1108 00:18:06.570938   50505 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 00:18:05.715384   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:07.716056   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:07.739455   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:09.740028   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:09.716612   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:12.215477   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:11.742123   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:14.242024   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:15.847386   50613 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (16.888899647s)
	I1108 00:18:15.847471   50613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:15.865800   50613 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:18:15.877857   50613 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:18:15.888952   50613 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:18:15.889014   50613 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1108 00:18:16.126155   50613 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 00:18:17.730060   50505 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1108 00:18:17.730164   50505 kubeadm.go:322] [preflight] Running pre-flight checks
	I1108 00:18:17.730282   50505 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 00:18:17.730411   50505 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 00:18:17.730564   50505 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 00:18:17.730648   50505 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 00:18:17.732613   50505 out.go:204]   - Generating certificates and keys ...
	I1108 00:18:17.732709   50505 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1108 00:18:17.732788   50505 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1108 00:18:17.732916   50505 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1108 00:18:17.732995   50505 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1108 00:18:17.733104   50505 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1108 00:18:17.733186   50505 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1108 00:18:17.733265   50505 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1108 00:18:17.733344   50505 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1108 00:18:17.733429   50505 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1108 00:18:17.733526   50505 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1108 00:18:17.733572   50505 kubeadm.go:322] [certs] Using the existing "sa" key
	I1108 00:18:17.733640   50505 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 00:18:17.733699   50505 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 00:18:17.733763   50505 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 00:18:17.733838   50505 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 00:18:17.733905   50505 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 00:18:17.734002   50505 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 00:18:17.734088   50505 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 00:18:17.735708   50505 out.go:204]   - Booting up control plane ...
	I1108 00:18:17.735808   50505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 00:18:17.735898   50505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 00:18:17.735981   50505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 00:18:17.736113   50505 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 00:18:17.736209   50505 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 00:18:17.736255   50505 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1108 00:18:17.736431   50505 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1108 00:18:17.736517   50505 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503639 seconds
	I1108 00:18:17.736637   50505 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 00:18:17.736779   50505 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 00:18:17.736873   50505 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 00:18:17.737093   50505 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-320390 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 00:18:17.737168   50505 kubeadm.go:322] [bootstrap-token] Using token: 8lntxi.1hule2axpc9kkhcs
	I1108 00:18:17.738763   50505 out.go:204]   - Configuring RBAC rules ...
	I1108 00:18:17.738904   50505 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 00:18:17.739014   50505 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 00:18:17.739197   50505 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 00:18:17.739364   50505 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I1108 00:18:17.739534   50505 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 00:18:17.739651   50505 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 00:18:17.739781   50505 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 00:18:17.739829   50505 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1108 00:18:17.739881   50505 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1108 00:18:17.739889   50505 kubeadm.go:322] 
	I1108 00:18:17.739956   50505 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1108 00:18:17.739964   50505 kubeadm.go:322] 
	I1108 00:18:17.740051   50505 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1108 00:18:17.740065   50505 kubeadm.go:322] 
	I1108 00:18:17.740094   50505 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1108 00:18:17.740165   50505 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 00:18:17.740229   50505 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 00:18:17.740239   50505 kubeadm.go:322] 
	I1108 00:18:17.740311   50505 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1108 00:18:17.740320   50505 kubeadm.go:322] 
	I1108 00:18:17.740375   50505 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 00:18:17.740385   50505 kubeadm.go:322] 
	I1108 00:18:17.740443   50505 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1108 00:18:17.740528   50505 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 00:18:17.740629   50505 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 00:18:17.740640   50505 kubeadm.go:322] 
	I1108 00:18:17.740733   50505 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 00:18:17.740840   50505 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1108 00:18:17.740860   50505 kubeadm.go:322] 
	I1108 00:18:17.740959   50505 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8lntxi.1hule2axpc9kkhcs \
	I1108 00:18:17.741077   50505 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 \
	I1108 00:18:17.741106   50505 kubeadm.go:322] 	--control-plane 
	I1108 00:18:17.741114   50505 kubeadm.go:322] 
	I1108 00:18:17.741207   50505 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1108 00:18:17.741221   50505 kubeadm.go:322] 
	I1108 00:18:17.741312   50505 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8lntxi.1hule2axpc9kkhcs \
	I1108 00:18:17.741435   50505 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 
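With kubeadm init reporting success for no-preload-320390, the control plane can be spot-checked independently of the test harness; both probes below are read-only and reuse the binary and admin kubeconfig paths already shown in this log:

    # Read-only health probes against the freshly initialized control plane
    sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig /etc/kubernetes/admin.conf \
        get --raw='/readyz?verbose'
    sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig /etc/kubernetes/admin.conf \
        get nodes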
	I1108 00:18:17.741451   50505 cni.go:84] Creating CNI manager for ""
	I1108 00:18:17.741460   50505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:18:17.742996   50505 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:18:17.744307   50505 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:18:17.800065   50505 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
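The 457-byte payload copied here is the bridge CNI configuration chosen by the "kvm2" driver + "crio" runtime recommendation above. The log does not show the file's contents; a bridge-plus-portmap conflist of roughly that size looks like the following (every field value is illustrative, not minikube's actual file):

    # Illustrative bridge CNI config of the kind installed as /etc/cni/net.d/1-k8s.conflist
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF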
	I1108 00:18:17.844561   50505 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 00:18:17.844628   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:17.844636   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=no-preload-320390 minikube.k8s.io/updated_at=2023_11_08T00_18_17_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:18.268124   50505 ops.go:34] apiserver oom_adj: -16
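The -16 read back from /proc confirms the kubelet launched kube-apiserver with a strongly negative OOM adjustment: oom_adj ranges from -17 (exempt from the OOM killer) to +15, so the apiserver is among the last processes the kernel will kill under memory pressure. The check is the one-liner already shown in the Run: line above:

    # Lower oom_adj = less likely to be OOM-killed; -16 sits just above the -17 "never kill" floor
    cat /proc/$(pgrep kube-apiserver)/oom_adj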
	I1108 00:18:18.268268   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:18.391271   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:18.999821   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:14.715492   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:16.716036   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:19.217395   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:16.739748   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:18.722551   51228 pod_ready.go:81] duration metric: took 4m0.000232672s waiting for pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace to be "Ready" ...
	E1108 00:18:18.722600   51228 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1108 00:18:18.722616   51228 pod_ready.go:38] duration metric: took 4m7.657742468s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:18:18.722637   51228 kubeadm.go:640] restartCluster took 4m28.262375275s
	W1108 00:18:18.722722   51228 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1108 00:18:18.722756   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1108 00:18:19.500069   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:20.000575   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:20.500545   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:20.999918   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:21.499960   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:22.000673   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:22.499811   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:23.000501   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:23.499942   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:24.000407   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:21.217427   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:23.715751   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:27.224428   50613 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1108 00:18:27.224497   50613 kubeadm.go:322] [preflight] Running pre-flight checks
	I1108 00:18:27.224589   50613 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 00:18:27.224720   50613 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 00:18:27.224916   50613 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 00:18:27.225019   50613 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 00:18:27.226893   50613 out.go:204]   - Generating certificates and keys ...
	I1108 00:18:27.227001   50613 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1108 00:18:27.227091   50613 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1108 00:18:27.227201   50613 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1108 00:18:27.227279   50613 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1108 00:18:27.227365   50613 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1108 00:18:27.227433   50613 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1108 00:18:27.227517   50613 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1108 00:18:27.227602   50613 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1108 00:18:27.227719   50613 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1108 00:18:27.227808   50613 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1108 00:18:27.227864   50613 kubeadm.go:322] [certs] Using the existing "sa" key
	I1108 00:18:27.227938   50613 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 00:18:27.228013   50613 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 00:18:27.228102   50613 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 00:18:27.228186   50613 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 00:18:27.228264   50613 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 00:18:27.228387   50613 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 00:18:27.228479   50613 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 00:18:27.229827   50613 out.go:204]   - Booting up control plane ...
	I1108 00:18:27.229950   50613 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 00:18:27.230032   50613 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 00:18:27.230124   50613 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 00:18:27.230265   50613 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 00:18:27.230387   50613 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 00:18:27.230447   50613 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1108 00:18:27.230699   50613 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1108 00:18:27.230810   50613 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503846 seconds
	I1108 00:18:27.230970   50613 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 00:18:27.231145   50613 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 00:18:27.231237   50613 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 00:18:27.231478   50613 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-253253 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 00:18:27.231573   50613 kubeadm.go:322] [bootstrap-token] Using token: vyjibp.12wjj754q6czu5uo
	I1108 00:18:27.233159   50613 out.go:204]   - Configuring RBAC rules ...
	I1108 00:18:27.233266   50613 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 00:18:27.233340   50613 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 00:18:27.233454   50613 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 00:18:27.233558   50613 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I1108 00:18:27.233693   50613 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 00:18:27.233793   50613 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 00:18:27.233943   50613 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 00:18:27.234012   50613 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1108 00:18:27.234074   50613 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1108 00:18:27.234086   50613 kubeadm.go:322] 
	I1108 00:18:27.234174   50613 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1108 00:18:27.234191   50613 kubeadm.go:322] 
	I1108 00:18:27.234300   50613 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1108 00:18:27.234310   50613 kubeadm.go:322] 
	I1108 00:18:27.234337   50613 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1108 00:18:27.234388   50613 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 00:18:27.234432   50613 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 00:18:27.234436   50613 kubeadm.go:322] 
	I1108 00:18:27.234490   50613 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1108 00:18:27.234507   50613 kubeadm.go:322] 
	I1108 00:18:27.234567   50613 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 00:18:27.234577   50613 kubeadm.go:322] 
	I1108 00:18:27.234651   50613 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1108 00:18:27.234756   50613 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 00:18:27.234858   50613 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 00:18:27.234873   50613 kubeadm.go:322] 
	I1108 00:18:27.234959   50613 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 00:18:27.235056   50613 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1108 00:18:27.235066   50613 kubeadm.go:322] 
	I1108 00:18:27.235184   50613 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token vyjibp.12wjj754q6czu5uo \
	I1108 00:18:27.235334   50613 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 \
	I1108 00:18:27.235369   50613 kubeadm.go:322] 	--control-plane 
	I1108 00:18:27.235378   50613 kubeadm.go:322] 
	I1108 00:18:27.235476   50613 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1108 00:18:27.235487   50613 kubeadm.go:322] 
	I1108 00:18:27.235585   50613 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vyjibp.12wjj754q6czu5uo \
	I1108 00:18:27.235734   50613 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 
	I1108 00:18:27.235751   50613 cni.go:84] Creating CNI manager for ""
	I1108 00:18:27.235759   50613 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:18:27.237411   50613 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:18:24.499703   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:24.999659   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:25.499724   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:26.000534   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:26.500532   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:26.999903   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:27.500582   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:28.000156   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:28.500443   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:29.000019   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:26.213623   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:28.214432   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:29.500525   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:29.999698   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:30.173272   50505 kubeadm.go:1081] duration metric: took 12.328709999s to wait for elevateKubeSystemPrivileges.
	I1108 00:18:30.173304   50505 kubeadm.go:406] StartCluster complete in 5m9.613679996s
	I1108 00:18:30.173323   50505 settings.go:142] acquiring lock: {Name:mk24113e0811d0822c92609e9886aa6fa175d90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:18:30.173399   50505 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:18:30.175022   50505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:18:30.175277   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 00:18:30.175394   50505 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1108 00:18:30.175512   50505 addons.go:69] Setting storage-provisioner=true in profile "no-preload-320390"
	I1108 00:18:30.175534   50505 addons.go:231] Setting addon storage-provisioner=true in "no-preload-320390"
	W1108 00:18:30.175546   50505 addons.go:240] addon storage-provisioner should already be in state true
	I1108 00:18:30.175591   50505 host.go:66] Checking if "no-preload-320390" exists ...
	I1108 00:18:30.175595   50505 config.go:182] Loaded profile config "no-preload-320390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:18:30.175648   50505 addons.go:69] Setting default-storageclass=true in profile "no-preload-320390"
	I1108 00:18:30.175669   50505 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-320390"
	I1108 00:18:30.175856   50505 addons.go:69] Setting metrics-server=true in profile "no-preload-320390"
	I1108 00:18:30.175880   50505 addons.go:231] Setting addon metrics-server=true in "no-preload-320390"
	W1108 00:18:30.175890   50505 addons.go:240] addon metrics-server should already be in state true
	I1108 00:18:30.175932   50505 host.go:66] Checking if "no-preload-320390" exists ...
	I1108 00:18:30.176004   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.176047   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.176074   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.176110   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.176255   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.176297   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.193487   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34549
	I1108 00:18:30.194065   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.194643   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38457
	I1108 00:18:30.194791   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.194809   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.195197   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.195244   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.195454   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35159
	I1108 00:18:30.195741   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.195758   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.195840   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.195975   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.196019   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.196254   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.196377   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.196401   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.196444   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetState
	I1108 00:18:30.196747   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.197318   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.197365   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.200432   50505 addons.go:231] Setting addon default-storageclass=true in "no-preload-320390"
	W1108 00:18:30.200454   50505 addons.go:240] addon default-storageclass should already be in state true
	I1108 00:18:30.200482   50505 host.go:66] Checking if "no-preload-320390" exists ...
	I1108 00:18:30.200858   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.200904   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.214840   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45815
	I1108 00:18:30.215335   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.215693   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.215710   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.216018   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.216163   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetState
	I1108 00:18:30.216761   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32969
	I1108 00:18:30.217467   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.218005   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:18:30.218255   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.218276   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.218567   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.218686   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetState
	I1108 00:18:30.218895   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33449
	I1108 00:18:30.219282   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.221453   50505 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:18:30.219887   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.220152   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:18:30.227122   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.227187   50505 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:18:30.227203   50505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 00:18:30.227220   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:18:30.229126   50505 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1108 00:18:30.227716   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.230458   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:18:30.231018   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:18:30.231625   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:18:30.231640   50505 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 00:18:30.231664   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:18:30.231663   50505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 00:18:30.231687   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:18:30.231871   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:18:30.232040   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:18:30.232130   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.232164   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.232167   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:18:30.234984   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:18:30.235307   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:18:30.235327   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:18:30.235589   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:18:30.235819   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:18:30.236102   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:18:30.236409   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:18:30.248939   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33483
	I1108 00:18:30.249596   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.250088   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.250105   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.250535   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.250715   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetState
	I1108 00:18:30.252631   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:18:30.252909   50505 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 00:18:30.252923   50505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 00:18:30.252941   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:18:30.255926   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:18:30.256320   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:18:30.256354   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:18:30.256440   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:18:30.256639   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:18:30.256795   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:18:30.257009   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:18:30.299537   50505 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-320390" context rescaled to 1 replicas
	I1108 00:18:30.299586   50505 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.176 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 00:18:30.301520   50505 out.go:177] * Verifying Kubernetes components...
	I1108 00:18:27.238758   50613 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:18:27.263679   50613 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
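The 457-byte /etc/cni/net.d/1-k8s.conflist written above is minikube's bridge CNI configuration. The log does not show its contents; the following is a representative bridge conflist of that shape (the subnet and plugin options are illustrative assumptions, not the verbatim file):

	$ sudo cat /etc/cni/net.d/1-k8s.conflist
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}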
	I1108 00:18:27.350198   50613 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 00:18:27.350271   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:27.350293   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=embed-certs-253253 minikube.k8s.io/updated_at=2023_11_08T00_18_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:27.409145   50613 ops.go:34] apiserver oom_adj: -16
	I1108 00:18:27.761874   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:27.882030   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:28.495425   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:28.995764   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:29.495154   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:29.994859   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:30.495492   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:30.995328   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:31.495353   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:30.303227   50505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:30.426941   50505 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 00:18:30.426964   50505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1108 00:18:30.450862   50505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:18:30.456250   50505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 00:18:30.482239   50505 node_ready.go:35] waiting up to 6m0s for node "no-preload-320390" to be "Ready" ...
	I1108 00:18:30.482286   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 00:18:30.493041   50505 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 00:18:30.493073   50505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 00:18:30.542548   50505 node_ready.go:49] node "no-preload-320390" has status "Ready":"True"
	I1108 00:18:30.542579   50505 node_ready.go:38] duration metric: took 60.300148ms waiting for node "no-preload-320390" to be "Ready" ...
	I1108 00:18:30.542593   50505 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:18:30.554527   50505 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:18:30.554560   50505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 00:18:30.648882   50505 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:30.658134   50505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:18:32.959227   50505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.50832393s)
	I1108 00:18:32.959242   50505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.502960333s)
	I1108 00:18:32.959281   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:32.959287   50505 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.476976723s)
	I1108 00:18:32.959301   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:32.959347   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:32.959307   50505 start.go:926] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1108 00:18:32.959293   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:32.959711   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:32.959729   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:32.959748   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:32.959761   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:32.959771   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:32.959780   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:32.959795   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:32.959807   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:32.960123   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:32.960137   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:32.960207   50505 main.go:141] libmachine: (no-preload-320390) DBG | Closing plugin on server side
	I1108 00:18:32.960229   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:32.960237   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:33.007609   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:33.007641   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:33.007926   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:33.007945   50505 main.go:141] libmachine: Making call to close connection to plugin binary
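The CoreDNS ConfigMap rewrite that completed above splices two stanzas into the Corefile via sed: a log directive before the errors line, and a hosts block before the forward directive, so that host.minikube.internal resolves to the host-only gateway. Reconstructed from the sed expressions in the command itself, the edited Corefile region should read roughly:

	    log
	    errors
	    ...
	    hosts {
	       192.168.61.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf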
	I1108 00:18:33.106167   50505 pod_ready.go:102] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:33.284838   50505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.626637787s)
	I1108 00:18:33.284900   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:33.284916   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:33.285239   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:33.285259   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:33.285269   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:33.285278   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:33.285579   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:33.285612   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:33.285626   50505 addons.go:467] Verifying addon metrics-server=true in "no-preload-320390"
	I1108 00:18:33.285579   50505 main.go:141] libmachine: (no-preload-320390) DBG | Closing plugin on server side
	I1108 00:18:33.288563   50505 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1108 00:18:33.290062   50505 addons.go:502] enable addons completed in 3.114669599s: enabled=[storage-provisioner default-storageclass metrics-server]
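"Verifying addon metrics-server" here only confirms that the manifests applied cleanly; a hand check of the same thing, assuming the profile's kubeconfig context, might look like:

	$ kubectl --context no-preload-320390 -n kube-system rollout status deploy/metrics-server
	$ kubectl get apiservice v1beta1.metrics.k8s.io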
	I1108 00:18:30.231324   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:32.715318   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:33.473926   51228 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.751140561s)
	I1108 00:18:33.473999   51228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:33.489630   51228 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:18:33.501413   51228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:18:33.513531   51228 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:18:33.513588   51228 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1108 00:18:33.767243   51228 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
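This preflight warning is expected in this flow, since minikube manages the kubelet unit itself after kubeadm init; on a standalone host it would be cleared by enabling the unit:

	sudo systemctl enable kubelet.service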
	I1108 00:18:31.995169   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:32.494991   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:32.995423   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:33.494761   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:33.995099   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:34.494829   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:34.995699   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:35.495034   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:35.995563   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:36.494752   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:35.563227   50505 pod_ready.go:102] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:37.563703   50505 pod_ready.go:102] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:34.715399   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:36.717212   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:39.215769   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:36.995285   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:37.495447   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:37.995529   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:38.494898   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:38.995450   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:39.494831   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:39.994880   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:40.097031   50613 kubeadm.go:1081] duration metric: took 12.746819294s to wait for elevateKubeSystemPrivileges.
	I1108 00:18:40.097074   50613 kubeadm.go:406] StartCluster complete in 5m13.552864243s
	I1108 00:18:40.097102   50613 settings.go:142] acquiring lock: {Name:mk24113e0811d0822c92609e9886aa6fa175d90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:18:40.097182   50613 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:18:40.099232   50613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:18:40.099513   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 00:18:40.099522   50613 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1108 00:18:40.099603   50613 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-253253"
	I1108 00:18:40.099612   50613 addons.go:69] Setting default-storageclass=true in profile "embed-certs-253253"
	I1108 00:18:40.099625   50613 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-253253"
	I1108 00:18:40.099626   50613 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-253253"
	W1108 00:18:40.099635   50613 addons.go:240] addon storage-provisioner should already be in state true
	I1108 00:18:40.099675   50613 host.go:66] Checking if "embed-certs-253253" exists ...
	I1108 00:18:40.099724   50613 config.go:182] Loaded profile config "embed-certs-253253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:18:40.099769   50613 addons.go:69] Setting metrics-server=true in profile "embed-certs-253253"
	I1108 00:18:40.099783   50613 addons.go:231] Setting addon metrics-server=true in "embed-certs-253253"
	W1108 00:18:40.099791   50613 addons.go:240] addon metrics-server should already be in state true
	I1108 00:18:40.099827   50613 host.go:66] Checking if "embed-certs-253253" exists ...
	I1108 00:18:40.100063   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.100064   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.100085   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.100086   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.100199   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.100229   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.117281   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35397
	I1108 00:18:40.117806   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.118339   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.118364   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.118717   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.118761   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38821
	I1108 00:18:40.119093   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.119311   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.119334   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.119497   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.119520   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.119668   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
	I1108 00:18:40.119841   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.119970   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.120403   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.120436   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.120443   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.120456   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.120895   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.121048   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetState
	I1108 00:18:40.123728   50613 addons.go:231] Setting addon default-storageclass=true in "embed-certs-253253"
	W1108 00:18:40.123746   50613 addons.go:240] addon default-storageclass should already be in state true
	I1108 00:18:40.123774   50613 host.go:66] Checking if "embed-certs-253253" exists ...
	I1108 00:18:40.124049   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.124073   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.139787   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39437
	I1108 00:18:40.140217   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.140776   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.140799   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.141358   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.143152   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34997
	I1108 00:18:40.143448   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetState
	I1108 00:18:40.144341   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.145156   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.145175   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.145536   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.145695   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:18:40.146126   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.146151   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.147863   50613 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:18:40.149252   50613 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:18:40.149270   50613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 00:18:40.149288   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:18:40.149701   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41685
	I1108 00:18:40.150096   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.150599   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.150613   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.151053   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.151223   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetState
	I1108 00:18:40.152047   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:18:40.152462   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:18:40.152476   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:18:40.152718   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:18:40.152834   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:18:40.152927   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:18:40.153008   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:18:40.153394   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:18:40.155041   50613 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1108 00:18:40.156603   50613 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 00:18:40.156625   50613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 00:18:40.156642   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:18:40.159550   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:18:40.159952   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:18:40.159973   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:18:40.160151   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:18:40.160294   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:18:40.160403   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:18:40.160505   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:18:40.162863   50613 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-253253" context rescaled to 1 replicas
	I1108 00:18:40.162890   50613 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 00:18:40.164733   50613 out.go:177] * Verifying Kubernetes components...
	I1108 00:18:40.166082   50613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:40.167562   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36079
	I1108 00:18:40.167938   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.168414   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.168433   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.168805   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.169056   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetState
	I1108 00:18:40.170751   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:18:40.171377   50613 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 00:18:40.171389   50613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 00:18:40.171402   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:18:40.174508   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:18:40.174826   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:18:40.174859   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:18:40.175035   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:18:40.175182   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:18:40.175341   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:18:40.175467   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:18:40.387003   50613 node_ready.go:35] waiting up to 6m0s for node "embed-certs-253253" to be "Ready" ...
	I1108 00:18:40.387126   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 00:18:40.398413   50613 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 00:18:40.398489   50613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1108 00:18:40.400162   50613 node_ready.go:49] node "embed-certs-253253" has status "Ready":"True"
	I1108 00:18:40.400189   50613 node_ready.go:38] duration metric: took 13.150355ms waiting for node "embed-certs-253253" to be "Ready" ...
	I1108 00:18:40.400204   50613 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:18:40.416263   50613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 00:18:40.420346   50613 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-thtp4" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:40.441486   50613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:18:40.468701   50613 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 00:18:40.468731   50613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 00:18:40.546438   50613 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:18:40.546475   50613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 00:18:40.620999   50613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:18:41.963134   50613 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.575984932s)
	I1108 00:18:41.963222   50613 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1108 00:18:41.963099   50613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.546802194s)
	I1108 00:18:41.963311   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:41.963342   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:41.963771   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:41.963821   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:41.963843   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:41.963862   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:41.964176   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:41.964202   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:41.964188   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Closing plugin on server side
	I1108 00:18:41.997903   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:41.997987   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:41.998341   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Closing plugin on server side
	I1108 00:18:41.998428   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:41.998487   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:42.447761   50613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.006222409s)
	I1108 00:18:42.447810   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:42.447824   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:42.448092   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:42.448109   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:42.448110   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Closing plugin on server side
	I1108 00:18:42.448127   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:42.448143   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:42.449994   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Closing plugin on server side
	I1108 00:18:42.450013   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:42.450027   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:42.484250   50613 pod_ready.go:102] pod "coredns-5dd5756b68-thtp4" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:42.788997   50613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.167954058s)
	I1108 00:18:42.789042   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:42.789057   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:42.789342   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Closing plugin on server side
	I1108 00:18:42.789395   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:42.789416   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:42.789427   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:42.789437   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:42.789673   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:42.789698   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:42.789709   50613 addons.go:467] Verifying addon metrics-server=true in "embed-certs-253253"
	I1108 00:18:42.792162   50613 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1108 00:18:39.563860   50505 pod_ready.go:102] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:41.565166   50505 pod_ready.go:102] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:44.063902   50505 pod_ready.go:102] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:41.216274   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:43.717636   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:45.631283   51228 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1108 00:18:45.631354   51228 kubeadm.go:322] [preflight] Running pre-flight checks
	I1108 00:18:45.631464   51228 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 00:18:45.631583   51228 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 00:18:45.631736   51228 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 00:18:45.631848   51228 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 00:18:45.633488   51228 out.go:204]   - Generating certificates and keys ...
	I1108 00:18:45.633579   51228 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1108 00:18:45.633656   51228 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1108 00:18:45.633756   51228 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1108 00:18:45.633840   51228 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1108 00:18:45.633947   51228 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1108 00:18:45.634041   51228 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1108 00:18:45.634140   51228 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1108 00:18:45.634244   51228 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1108 00:18:45.634357   51228 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1108 00:18:45.634458   51228 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1108 00:18:45.634541   51228 kubeadm.go:322] [certs] Using the existing "sa" key
	I1108 00:18:45.634625   51228 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 00:18:45.634713   51228 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 00:18:45.634781   51228 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 00:18:45.634865   51228 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 00:18:45.634935   51228 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 00:18:45.635044   51228 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 00:18:45.635133   51228 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 00:18:45.636666   51228 out.go:204]   - Booting up control plane ...
	I1108 00:18:45.636755   51228 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 00:18:45.636862   51228 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 00:18:45.636939   51228 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 00:18:45.637065   51228 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 00:18:45.637164   51228 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 00:18:45.637221   51228 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1108 00:18:45.637410   51228 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1108 00:18:45.637479   51228 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.005347 seconds
	I1108 00:18:45.637583   51228 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 00:18:45.637710   51228 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 00:18:45.637782   51228 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 00:18:45.637961   51228 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-039263 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 00:18:45.638007   51228 kubeadm.go:322] [bootstrap-token] Using token: ub1ww5.kh6zrwfrcg8jc9rc
	I1108 00:18:45.639491   51228 out.go:204]   - Configuring RBAC rules ...
	I1108 00:18:45.639627   51228 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 00:18:45.639743   51228 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 00:18:45.639918   51228 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 00:18:45.640060   51228 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 00:18:45.640240   51228 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 00:18:45.640344   51228 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 00:18:45.640487   51228 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 00:18:45.640546   51228 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1108 00:18:45.640625   51228 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1108 00:18:45.640643   51228 kubeadm.go:322] 
	I1108 00:18:45.640726   51228 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1108 00:18:45.640737   51228 kubeadm.go:322] 
	I1108 00:18:45.640850   51228 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1108 00:18:45.640860   51228 kubeadm.go:322] 
	I1108 00:18:45.640891   51228 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1108 00:18:45.640968   51228 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 00:18:45.641042   51228 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 00:18:45.641048   51228 kubeadm.go:322] 
	I1108 00:18:45.641124   51228 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1108 00:18:45.641137   51228 kubeadm.go:322] 
	I1108 00:18:45.641193   51228 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 00:18:45.641204   51228 kubeadm.go:322] 
	I1108 00:18:45.641266   51228 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1108 00:18:45.641372   51228 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 00:18:45.641485   51228 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 00:18:45.641493   51228 kubeadm.go:322] 
	I1108 00:18:45.641589   51228 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 00:18:45.641704   51228 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1108 00:18:45.641714   51228 kubeadm.go:322] 
	I1108 00:18:45.641815   51228 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token ub1ww5.kh6zrwfrcg8jc9rc \
	I1108 00:18:45.641939   51228 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 \
	I1108 00:18:45.641971   51228 kubeadm.go:322] 	--control-plane 
	I1108 00:18:45.641979   51228 kubeadm.go:322] 
	I1108 00:18:45.642084   51228 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1108 00:18:45.642093   51228 kubeadm.go:322] 
	I1108 00:18:45.642216   51228 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token ub1ww5.kh6zrwfrcg8jc9rc \
	I1108 00:18:45.642356   51228 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 
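
The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A minimal Go sketch of how such a hash can be recomputed from the CA file (the path assumes kubeadm's default layout; illustrative only, not minikube's own code):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // default kubeadm CA path (assumption)
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the whole certificate.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }
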
	I1108 00:18:45.642372   51228 cni.go:84] Creating CNI manager for ""
	I1108 00:18:45.642379   51228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:18:45.644712   51228 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:18:45.646211   51228 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:18:45.672621   51228 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
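
The 457-byte file written above is a CNI "conflist" for the bridge plugin chosen at cni.go:146. A rough Go sketch that emits a conflist of the same general shape; the concrete values (name, bridge device, subnet) are assumptions, not the exact bytes from this run:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Shape of a bridge CNI conflist; field values below are illustrative assumptions.
        conflist := map[string]any{
            "cniVersion": "0.3.1",
            "name":       "bridge",
            "plugins": []any{
                map[string]any{
                    "type":             "bridge",
                    "bridge":           "bridge",
                    "isDefaultGateway": true,
                    "ipMasq":           true,
                    "hairpinMode":      true,
                    "ipam": map[string]any{
                        "type":   "host-local",
                        "subnet": "10.244.0.0/16",
                    },
                },
            },
        }
        out, _ := json.MarshalIndent(conflist, "", "  ")
        fmt.Println(string(out))
    }
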
	I1108 00:18:45.700061   51228 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 00:18:45.700142   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:45.700153   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=default-k8s-diff-port-039263 minikube.k8s.io/updated_at=2023_11_08T00_18_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:45.805900   51228 ops.go:34] apiserver oom_adj: -16
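
The oom_adj probe above confirms the kernel OOM killer will spare the apiserver: -16 strongly deprioritizes it as an OOM victim (the range is -17 to 15, lower meaning less likely to be killed). The same check, run locally rather than over SSH, using the shell pipeline from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same probe as in the log: resolve the kube-apiserver PID, read its oom_adj.
        out, err := exec.Command("/bin/bash", "-c",
            "cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
        if err != nil {
            panic(err) // no kube-apiserver running on this host
        }
        fmt.Printf("apiserver oom_adj: %s", out)
    }
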
	I1108 00:18:42.794167   50613 addons.go:502] enable addons completed in 2.694639707s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1108 00:18:44.953906   50613 pod_ready.go:92] pod "coredns-5dd5756b68-thtp4" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.953928   50613 pod_ready.go:81] duration metric: took 4.533558234s waiting for pod "coredns-5dd5756b68-thtp4" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.953936   50613 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.958854   50613 pod_ready.go:92] pod "etcd-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.958880   50613 pod_ready.go:81] duration metric: took 4.937561ms waiting for pod "etcd-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.958892   50613 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.964282   50613 pod_ready.go:92] pod "kube-apiserver-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.964305   50613 pod_ready.go:81] duration metric: took 5.40486ms waiting for pod "kube-apiserver-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.964317   50613 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.969544   50613 pod_ready.go:92] pod "kube-controller-manager-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.969561   50613 pod_ready.go:81] duration metric: took 5.237377ms waiting for pod "kube-controller-manager-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.969568   50613 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-shp9z" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.974340   50613 pod_ready.go:92] pod "kube-proxy-shp9z" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.974357   50613 pod_ready.go:81] duration metric: took 4.78369ms waiting for pod "kube-proxy-shp9z" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.974367   50613 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:45.350442   50613 pod_ready.go:92] pod "kube-scheduler-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:45.350465   50613 pod_ready.go:81] duration metric: took 376.091394ms waiting for pod "kube-scheduler-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:45.350473   50613 pod_ready.go:38] duration metric: took 4.950259719s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
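
Each pod_ready.go wait above boils down to polling the pod object and inspecting its Ready condition. A self-contained sketch of that check with client-go; the kubeconfig path and pod name are placeholders taken from this log, and error handling is trimmed relative to what minikube actually does:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-253253", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }
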
	I1108 00:18:45.350487   50613 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:18:45.350529   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:18:45.366477   50613 api_server.go:72] duration metric: took 5.203563902s to wait for apiserver process to appear ...
	I1108 00:18:45.366502   50613 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:18:45.366519   50613 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1108 00:18:45.375074   50613 api_server.go:279] https://192.168.39.159:8443/healthz returned 200:
	ok
	I1108 00:18:45.376646   50613 api_server.go:141] control plane version: v1.28.3
	I1108 00:18:45.376666   50613 api_server.go:131] duration metric: took 10.158963ms to wait for apiserver health ...
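
The healthz wait above is a plain HTTPS GET retried until the endpoint returns 200 with body "ok". A minimal sketch; the insecure TLS setting is an assumption for brevity, whereas minikube's real client validates against the cluster's CA material:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Assumption for brevity; verify the cluster CA in real code.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for {
            resp, err := client.Get("https://192.168.39.159:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
                if resp.StatusCode == http.StatusOK {
                    return
                }
            }
            time.Sleep(time.Second)
        }
    }
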
	I1108 00:18:45.376674   50613 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:18:45.554560   50613 system_pods.go:59] 8 kube-system pods found
	I1108 00:18:45.554598   50613 system_pods.go:61] "coredns-5dd5756b68-thtp4" [a3671b72-d562-4be2-9942-e971ee31b2c3] Running
	I1108 00:18:45.554605   50613 system_pods.go:61] "etcd-embed-certs-253253" [271bb11f-9263-43bb-a1ad-950b066f46bc] Running
	I1108 00:18:45.554611   50613 system_pods.go:61] "kube-apiserver-embed-certs-253253" [f247270e-3c67-4b37-a6ee-31934a59dd3c] Running
	I1108 00:18:45.554618   50613 system_pods.go:61] "kube-controller-manager-embed-certs-253253" [431c2e96-fff2-4076-95d4-11aa43e0d417] Running
	I1108 00:18:45.554624   50613 system_pods.go:61] "kube-proxy-shp9z" [cda240f2-977b-4318-9ee4-74f0090af489] Running
	I1108 00:18:45.554635   50613 system_pods.go:61] "kube-scheduler-embed-certs-253253" [a22238ad-7283-4dbf-8ff2-5626761a6e08] Running
	I1108 00:18:45.554655   50613 system_pods.go:61] "metrics-server-57f55c9bc5-f8rk4" [927cc877-7a22-47e3-b666-1adf0cc1b5c6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:18:45.554697   50613 system_pods.go:61] "storage-provisioner" [fa05e7e5-87e7-43ac-af74-1c8a713b51c5] Running
	I1108 00:18:45.554712   50613 system_pods.go:74] duration metric: took 178.032339ms to wait for pod list to return data ...
	I1108 00:18:45.554722   50613 default_sa.go:34] waiting for default service account to be created ...
	I1108 00:18:45.750181   50613 default_sa.go:45] found service account: "default"
	I1108 00:18:45.750210   50613 default_sa.go:55] duration metric: took 195.480878ms for default service account to be created ...
	I1108 00:18:45.750220   50613 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 00:18:45.953261   50613 system_pods.go:86] 8 kube-system pods found
	I1108 00:18:45.953303   50613 system_pods.go:89] "coredns-5dd5756b68-thtp4" [a3671b72-d562-4be2-9942-e971ee31b2c3] Running
	I1108 00:18:45.953312   50613 system_pods.go:89] "etcd-embed-certs-253253" [271bb11f-9263-43bb-a1ad-950b066f46bc] Running
	I1108 00:18:45.953320   50613 system_pods.go:89] "kube-apiserver-embed-certs-253253" [f247270e-3c67-4b37-a6ee-31934a59dd3c] Running
	I1108 00:18:45.953329   50613 system_pods.go:89] "kube-controller-manager-embed-certs-253253" [431c2e96-fff2-4076-95d4-11aa43e0d417] Running
	I1108 00:18:45.953348   50613 system_pods.go:89] "kube-proxy-shp9z" [cda240f2-977b-4318-9ee4-74f0090af489] Running
	I1108 00:18:45.953360   50613 system_pods.go:89] "kube-scheduler-embed-certs-253253" [a22238ad-7283-4dbf-8ff2-5626761a6e08] Running
	I1108 00:18:45.953375   50613 system_pods.go:89] "metrics-server-57f55c9bc5-f8rk4" [927cc877-7a22-47e3-b666-1adf0cc1b5c6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:18:45.953387   50613 system_pods.go:89] "storage-provisioner" [fa05e7e5-87e7-43ac-af74-1c8a713b51c5] Running
	I1108 00:18:45.953402   50613 system_pods.go:126] duration metric: took 203.174777ms to wait for k8s-apps to be running ...
	I1108 00:18:45.953414   50613 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 00:18:45.953471   50613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:45.969669   50613 system_svc.go:56] duration metric: took 16.24852ms WaitForService to wait for kubelet.
	I1108 00:18:45.969698   50613 kubeadm.go:581] duration metric: took 5.806787278s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1108 00:18:45.969720   50613 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:18:46.150807   50613 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:18:46.150839   50613 node_conditions.go:123] node cpu capacity is 2
	I1108 00:18:46.150853   50613 node_conditions.go:105] duration metric: took 181.127043ms to run NodePressure ...
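
The NodePressure verification above reads the node's capacity and checks that no pressure conditions are set. A sketch with client-go (node name taken from this log; kubeconfig path and error handling simplified):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, _ := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        cs, _ := kubernetes.NewForConfig(cfg)
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "embed-certs-253253", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
        cpu := node.Status.Capacity[corev1.ResourceCPU]
        fmt.Printf("ephemeral storage: %s, cpu: %s\n", storage.String(), cpu.String())
        // Fail if the kubelet reports memory or disk pressure.
        for _, c := range node.Status.Conditions {
            if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) && c.Status == corev1.ConditionTrue {
                fmt.Println("node under pressure:", c.Type)
            }
        }
    }
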
	I1108 00:18:46.150866   50613 start.go:228] waiting for startup goroutines ...
	I1108 00:18:46.150876   50613 start.go:233] waiting for cluster config update ...
	I1108 00:18:46.150886   50613 start.go:242] writing updated cluster config ...
	I1108 00:18:46.151185   50613 ssh_runner.go:195] Run: rm -f paused
	I1108 00:18:46.209047   50613 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1108 00:18:46.211074   50613 out.go:177] * Done! kubectl is now configured to use "embed-certs-253253" cluster and "default" namespace by default
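
The "(minor skew: 0)" note two lines up compares the client kubectl minor version with the cluster's; minikube flags larger skews as potentially incompatible. A toy sketch of the comparison, with version strings hard-coded from the log line:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minor extracts the minor component of a "major.minor.patch" version string.
    func minor(v string) int {
        parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
        m, _ := strconv.Atoi(parts[1])
        return m
    }

    func main() {
        client, cluster := "1.28.3", "1.28.3" // values from the log line above
        skew := minor(client) - minor(cluster)
        if skew < 0 {
            skew = -skew
        }
        fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
    }
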
	I1108 00:18:44.564102   50505 pod_ready.go:97] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.61.176 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-11-08 00:18:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-11-08 00:18:33 +0000 UTC,FinishedAt:2023-11-08 00:18:43 +0000 UTC,ContainerID:cri-o://4ffd62a60718dd1c6133afefc215085069920afc1cca2f055336a977765569cb,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3 ContainerID:cri-o://4ffd62a60718dd1c6133afefc215085069920afc1cca2f055336a977765569cb Started:0xc0035e3d00 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1108 00:18:44.564132   50505 pod_ready.go:81] duration metric: took 13.91522436s waiting for pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace to be "Ready" ...
	E1108 00:18:44.564147   50505 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.61.176 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-11-08 00:18:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-11-08 00:18:33 +0000 UTC,FinishedAt:2023-11-08 00:18:43 +0000 UTC,ContainerID:cri-o://4ffd62a60718dd1c6133afefc215085069920afc1cca2f055336a977765569cb,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3 ContainerID:cri-o://4ffd62a60718dd1c6133afefc215085069920afc1cca2f055336a977765569cb Started:0xc0035e3d00 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1108 00:18:44.564158   50505 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vl7nr" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.573431   50505 pod_ready.go:92] pod "coredns-5dd5756b68-vl7nr" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.573462   50505 pod_ready.go:81] duration metric: took 9.295648ms waiting for pod "coredns-5dd5756b68-vl7nr" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.573473   50505 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.580792   50505 pod_ready.go:92] pod "etcd-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.580828   50505 pod_ready.go:81] duration metric: took 7.346504ms waiting for pod "etcd-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.580840   50505 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.587095   50505 pod_ready.go:92] pod "kube-apiserver-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.587117   50505 pod_ready.go:81] duration metric: took 6.268891ms waiting for pod "kube-apiserver-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.587130   50505 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.594022   50505 pod_ready.go:92] pod "kube-controller-manager-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.594039   50505 pod_ready.go:81] duration metric: took 6.901477ms waiting for pod "kube-controller-manager-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.594052   50505 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m6k8g" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.960144   50505 pod_ready.go:92] pod "kube-proxy-m6k8g" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.960162   50505 pod_ready.go:81] duration metric: took 366.102529ms waiting for pod "kube-proxy-m6k8g" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.960173   50505 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:45.361366   50505 pod_ready.go:92] pod "kube-scheduler-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:45.361388   50505 pod_ready.go:81] duration metric: took 401.208779ms waiting for pod "kube-scheduler-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:45.361396   50505 pod_ready.go:38] duration metric: took 14.818791823s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:18:45.361408   50505 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:18:45.361453   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:18:45.377632   50505 api_server.go:72] duration metric: took 15.078013421s to wait for apiserver process to appear ...
	I1108 00:18:45.377656   50505 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:18:45.377673   50505 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1108 00:18:45.383912   50505 api_server.go:279] https://192.168.61.176:8443/healthz returned 200:
	ok
	I1108 00:18:45.385131   50505 api_server.go:141] control plane version: v1.28.3
	I1108 00:18:45.385153   50505 api_server.go:131] duration metric: took 7.489916ms to wait for apiserver health ...
	I1108 00:18:45.385163   50505 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:18:45.565081   50505 system_pods.go:59] 8 kube-system pods found
	I1108 00:18:45.565112   50505 system_pods.go:61] "coredns-5dd5756b68-vl7nr" [4c6d5125-ebac-4931-9af7-045d1c4ba2b1] Running
	I1108 00:18:45.565120   50505 system_pods.go:61] "etcd-no-preload-320390" [fed32a26-d2ab-4470-b424-cc123c0afdf2] Running
	I1108 00:18:45.565127   50505 system_pods.go:61] "kube-apiserver-no-preload-320390" [4cc8b2c1-0f11-4fa9-ab08-0b6039e98b08] Running
	I1108 00:18:45.565134   50505 system_pods.go:61] "kube-controller-manager-no-preload-320390" [028b3d4e-ab62-44c3-b78e-268012d13db3] Running
	I1108 00:18:45.565141   50505 system_pods.go:61] "kube-proxy-m6k8g" [60b019bf-527c-4265-a67c-31e6cf377039] Running
	I1108 00:18:45.565149   50505 system_pods.go:61] "kube-scheduler-no-preload-320390" [c9c606b6-8188-4918-a5c6-cdc845ca5fb4] Running
	I1108 00:18:45.565157   50505 system_pods.go:61] "metrics-server-57f55c9bc5-n49bz" [26c5310d-c29f-476a-a520-bd693143e248] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:18:45.565171   50505 system_pods.go:61] "storage-provisioner" [bdba396c-182a-4bef-8ccb-2275534d89c8] Running
	I1108 00:18:45.565185   50505 system_pods.go:74] duration metric: took 180.015317ms to wait for pod list to return data ...
	I1108 00:18:45.565196   50505 default_sa.go:34] waiting for default service account to be created ...
	I1108 00:18:45.760190   50505 default_sa.go:45] found service account: "default"
	I1108 00:18:45.760217   50505 default_sa.go:55] duration metric: took 195.014175ms for default service account to be created ...
	I1108 00:18:45.760227   50505 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 00:18:45.966186   50505 system_pods.go:86] 8 kube-system pods found
	I1108 00:18:45.966223   50505 system_pods.go:89] "coredns-5dd5756b68-vl7nr" [4c6d5125-ebac-4931-9af7-045d1c4ba2b1] Running
	I1108 00:18:45.966231   50505 system_pods.go:89] "etcd-no-preload-320390" [fed32a26-d2ab-4470-b424-cc123c0afdf2] Running
	I1108 00:18:45.966239   50505 system_pods.go:89] "kube-apiserver-no-preload-320390" [4cc8b2c1-0f11-4fa9-ab08-0b6039e98b08] Running
	I1108 00:18:45.966245   50505 system_pods.go:89] "kube-controller-manager-no-preload-320390" [028b3d4e-ab62-44c3-b78e-268012d13db3] Running
	I1108 00:18:45.966252   50505 system_pods.go:89] "kube-proxy-m6k8g" [60b019bf-527c-4265-a67c-31e6cf377039] Running
	I1108 00:18:45.966259   50505 system_pods.go:89] "kube-scheduler-no-preload-320390" [c9c606b6-8188-4918-a5c6-cdc845ca5fb4] Running
	I1108 00:18:45.966268   50505 system_pods.go:89] "metrics-server-57f55c9bc5-n49bz" [26c5310d-c29f-476a-a520-bd693143e248] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:18:45.966279   50505 system_pods.go:89] "storage-provisioner" [bdba396c-182a-4bef-8ccb-2275534d89c8] Running
	I1108 00:18:45.966294   50505 system_pods.go:126] duration metric: took 206.05956ms to wait for k8s-apps to be running ...
	I1108 00:18:45.966305   50505 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 00:18:45.966355   50505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:45.984753   50505 system_svc.go:56] duration metric: took 18.427005ms WaitForService to wait for kubelet.
	I1108 00:18:45.984781   50505 kubeadm.go:581] duration metric: took 15.685164805s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1108 00:18:45.984803   50505 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:18:46.159568   50505 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:18:46.159602   50505 node_conditions.go:123] node cpu capacity is 2
	I1108 00:18:46.159615   50505 node_conditions.go:105] duration metric: took 174.805156ms to run NodePressure ...
	I1108 00:18:46.159627   50505 start.go:228] waiting for startup goroutines ...
	I1108 00:18:46.159636   50505 start.go:233] waiting for cluster config update ...
	I1108 00:18:46.159649   50505 start.go:242] writing updated cluster config ...
	I1108 00:18:46.159934   50505 ssh_runner.go:195] Run: rm -f paused
	I1108 00:18:46.220234   50505 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1108 00:18:46.222217   50505 out.go:177] * Done! kubectl is now configured to use "no-preload-320390" cluster and "default" namespace by default
	I1108 00:18:46.222047   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:48.714709   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:46.109921   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:46.223968   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:46.849987   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:47.349982   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:47.850871   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:48.350081   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:48.850494   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:49.350809   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:49.850515   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:50.350227   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:50.850044   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:50.714976   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:53.214612   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:51.350594   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:51.850705   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:52.349971   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:52.850530   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:53.350696   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:53.850039   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:54.350523   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:54.849805   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:55.350560   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:55.849890   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:56.350679   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:56.849863   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:57.350004   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:57.850463   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:58.349999   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:58.850810   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:58.958213   51228 kubeadm.go:1081] duration metric: took 13.258132625s to wait for elevateKubeSystemPrivileges.
	I1108 00:18:58.958253   51228 kubeadm.go:406] StartCluster complete in 5m8.559036824s
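
The burst of "kubectl get sa default" runs above is a fixed-interval retry: minikube polls roughly every 500ms until the default service account exists before considering kube-system privileges settled. A sketch of that loop (the command path mirrors the log; the overall timeout value is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // timeout is an assumption
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.3/kubectl",
                "get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
            if cmd.Run() == nil {
                fmt.Println("default service account exists")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for default service account")
    }
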
	I1108 00:18:58.958281   51228 settings.go:142] acquiring lock: {Name:mk24113e0811d0822c92609e9886aa6fa175d90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:18:58.958371   51228 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:18:58.960083   51228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:18:58.960306   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 00:18:58.960417   51228 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1108 00:18:58.960497   51228 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-039263"
	I1108 00:18:58.960505   51228 config.go:182] Loaded profile config "default-k8s-diff-port-039263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:18:58.960517   51228 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-039263"
	I1108 00:18:58.960544   51228 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-039263"
	I1108 00:18:58.960521   51228 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-039263"
	I1108 00:18:58.960538   51228 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-039263"
	I1108 00:18:58.960588   51228 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-039263"
	W1108 00:18:58.960607   51228 addons.go:240] addon metrics-server should already be in state true
	I1108 00:18:58.960654   51228 host.go:66] Checking if "default-k8s-diff-port-039263" exists ...
	W1108 00:18:58.960566   51228 addons.go:240] addon storage-provisioner should already be in state true
	I1108 00:18:58.960732   51228 host.go:66] Checking if "default-k8s-diff-port-039263" exists ...
	I1108 00:18:58.961043   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:58.961079   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:58.961112   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:58.961115   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:58.961155   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:58.961164   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:58.980365   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41725
	I1108 00:18:58.980386   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46535
	I1108 00:18:58.980512   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45857
	I1108 00:18:58.980860   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:58.980912   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:58.980863   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:58.981328   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:58.981350   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:58.981457   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:58.981466   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:58.981477   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:58.981483   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:58.981861   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:58.981861   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:58.981863   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:58.982023   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetState
	I1108 00:18:58.982419   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:58.982429   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:58.982447   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:58.982464   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:58.985852   51228 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-039263"
	W1108 00:18:58.985875   51228 addons.go:240] addon default-storageclass should already be in state true
	I1108 00:18:58.985902   51228 host.go:66] Checking if "default-k8s-diff-port-039263" exists ...
	I1108 00:18:58.986359   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:58.986390   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:58.996161   51228 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-039263" context rescaled to 1 replicas
	I1108 00:18:58.996200   51228 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.116 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 00:18:58.998257   51228 out.go:177] * Verifying Kubernetes components...
	I1108 00:18:58.999857   51228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:58.999917   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35521
	I1108 00:18:58.998777   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I1108 00:18:59.000380   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:59.001040   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:59.001093   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:59.001205   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:59.001478   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:59.001674   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:59.001690   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:59.001762   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetState
	I1108 00:18:59.002038   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:59.002209   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetState
	I1108 00:18:59.003822   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:18:59.006057   51228 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1108 00:18:59.004254   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:18:59.006174   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46331
	I1108 00:18:59.007678   51228 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 00:18:59.007688   51228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 00:18:59.007706   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:18:59.009545   51228 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:18:55.714548   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:57.715173   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:59.007989   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:59.010470   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:18:59.010632   51228 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:18:59.010640   51228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 00:18:59.010653   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:18:59.011015   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:18:59.011039   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:18:59.011227   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:59.011250   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:59.011650   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:59.011657   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:18:59.012158   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:59.012188   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:59.012671   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:18:59.012805   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:18:59.012925   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:18:59.013938   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:18:59.014329   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:18:59.014348   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:18:59.014493   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:18:59.014645   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:18:59.014770   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:18:59.014879   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:18:59.030160   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44631
	I1108 00:18:59.030558   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:59.031087   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:59.031101   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:59.031353   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:59.031558   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetState
	I1108 00:18:59.033203   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:18:59.033540   51228 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 00:18:59.033556   51228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 00:18:59.033573   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:18:59.036749   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:18:59.037158   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:18:59.037177   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:18:59.037364   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:18:59.037551   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:18:59.037684   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:18:59.037791   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
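
Each "new ssh client" line above corresponds to dialing the VM with a per-machine private key. A hedged sketch of the equivalent using golang.org/x/crypto/ssh; minikube's sshutil wraps something similar, but the exact options here are assumptions (key path, address, and user are taken from this log):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; never do this in production
        }
        client, err := ssh.Dial("tcp", "192.168.72.116:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        fmt.Println("ssh session established")
    }
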
	I1108 00:18:59.349254   51228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 00:18:59.451588   51228 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-039263" to be "Ready" ...
	I1108 00:18:59.451664   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 00:18:59.464584   51228 node_ready.go:49] node "default-k8s-diff-port-039263" has status "Ready":"True"
	I1108 00:18:59.464616   51228 node_ready.go:38] duration metric: took 12.97792ms waiting for node "default-k8s-diff-port-039263" to be "Ready" ...
	I1108 00:18:59.464629   51228 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:18:59.475428   51228 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7ktrv" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:59.481740   51228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:18:59.483627   51228 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 00:18:59.483644   51228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1108 00:18:59.599214   51228 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 00:18:59.599244   51228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 00:18:59.661512   51228 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:18:59.661537   51228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 00:18:59.726775   51228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:19:01.455332   51228 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.003642063s)
	I1108 00:19:01.455368   51228 start.go:926] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
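
Decoded from the sed pipeline above: minikube rewrites the coredns ConfigMap so that a "hosts" stanza sits ahead of the Corefile's "forward . /etc/resolv.conf" line (and a "log" directive is added before "errors"), letting pods resolve the hypervisor as host.minikube.internal:

    hosts {
       192.168.72.1 host.minikube.internal
       fallthrough
    }
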
	I1108 00:19:01.455575   51228 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.106281369s)
	I1108 00:19:01.455635   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:01.455659   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:01.455957   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:01.456004   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:01.456026   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:01.456048   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:01.456296   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:01.456332   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:01.456339   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Closing plugin on server side
	I1108 00:19:01.485941   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:01.485970   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:01.486229   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:01.486287   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Closing plugin on server side
	I1108 00:19:01.486294   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:01.599500   51228 pod_ready.go:102] pod "coredns-5dd5756b68-7ktrv" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:01.893463   51228 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.411687372s)
	I1108 00:19:01.893518   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:01.893530   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:01.893844   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:01.893887   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Closing plugin on server side
	I1108 00:19:01.893904   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:01.893918   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:01.893928   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:01.894199   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:01.894215   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:02.421714   51228 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.694889947s)
	I1108 00:19:02.421768   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:02.421785   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:02.422098   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:02.422123   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:02.422141   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:02.422160   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:02.422138   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Closing plugin on server side
	I1108 00:19:02.422425   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Closing plugin on server side
	I1108 00:19:02.422467   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:02.422480   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:02.422492   51228 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-039263"
	I1108 00:19:02.424446   51228 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1108 00:18:59.715708   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:02.214990   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:02.426041   51228 addons.go:502] enable addons completed in 3.465624772s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1108 00:19:02.549025   51228 pod_ready.go:97] pod "coredns-5dd5756b68-7ktrv" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.72.116 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-11-08 00:18:58 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-11-08 00:19:01 +0000 UTC,FinishedAt:2023-11-08 00:19:01 +0000 UTC,ContainerID:cri-o://31fbf2f57498e1f90b02c6fd31ebc03a12f99cb350d5e2c4e6eb7ae3b30853b9,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://31fbf2f57498e1f90b02c6fd31ebc03a12f99cb350d5e2c4e6eb7ae3b30853b9 Started:0xc0030b331c AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1108 00:19:02.549056   51228 pod_ready.go:81] duration metric: took 3.073604936s waiting for pod "coredns-5dd5756b68-7ktrv" in "kube-system" namespace to be "Ready" ...
	E1108 00:19:02.549069   51228 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-7ktrv" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.72.116 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-11-08 00:18:58 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-11-08 00:19:01 +0000 UTC,FinishedAt:2023-11-08 00:19:01 +0000 UTC,ContainerID:cri-o://31fbf2f57498e1f90b02c6fd31ebc03a12f99cb350d5e2c4e6eb7ae3b30853b9,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://31fbf2f57498e1f90b02c6fd31ebc03a12f99cb350d5e2c4e6eb7ae3b30853b9 Started:0xc0030b331c AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
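
The two entries above show why the wait loop gives up on coredns-5dd5756b68-7ktrv: a pod whose phase is Failed is terminal and can never become Ready, so it is skipped and the loop moves on to the replacement pod. A minimal sketch of that decision, assuming the k8s.io/api types (the real logic in minikube's pod_ready.go is more involved):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // podReady reports whether a pod counts as "Ready", and whether it is in
    // a terminal phase (Failed/Succeeded) that should be skipped rather than
    // waited on, matching the "(skipping!)" entries in the log above.
    func podReady(pod *corev1.Pod) (ready, terminal bool) {
    	switch pod.Status.Phase {
    	case corev1.PodFailed, corev1.PodSucceeded:
    		return false, true // terminal: will never become Ready
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, false
    		}
    	}
    	return false, false
    }

    func main() {
    	pod := &corev1.Pod{Status: corev1.PodStatus{Phase: corev1.PodFailed}}
    	ready, terminal := podReady(pod)
    	fmt.Println(ready, terminal) // false true
    }
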
	I1108 00:19:02.549076   51228 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-tt9sm" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.096421   51228 pod_ready.go:92] pod "coredns-5dd5756b68-tt9sm" in "kube-system" namespace has status "Ready":"True"
	I1108 00:19:03.096449   51228 pod_ready.go:81] duration metric: took 547.365037ms waiting for pod "coredns-5dd5756b68-tt9sm" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.096461   51228 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.104473   51228 pod_ready.go:92] pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"True"
	I1108 00:19:03.104497   51228 pod_ready.go:81] duration metric: took 8.028055ms waiting for pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.104509   51228 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.108940   51228 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"True"
	I1108 00:19:03.108965   51228 pod_ready.go:81] duration metric: took 4.447315ms waiting for pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.108976   51228 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.458803   51228 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"True"
	I1108 00:19:03.458831   51228 pod_ready.go:81] duration metric: took 349.845574ms waiting for pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.458844   51228 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rhdhg" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:04.256435   51228 pod_ready.go:92] pod "kube-proxy-rhdhg" in "kube-system" namespace has status "Ready":"True"
	I1108 00:19:04.256457   51228 pod_ready.go:81] duration metric: took 797.605956ms waiting for pod "kube-proxy-rhdhg" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:04.256466   51228 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:04.655727   51228 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"True"
	I1108 00:19:04.655750   51228 pod_ready.go:81] duration metric: took 399.277263ms waiting for pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:04.655758   51228 pod_ready.go:38] duration metric: took 5.191103655s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:19:04.655772   51228 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:19:04.655823   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:19:04.671030   51228 api_server.go:72] duration metric: took 5.674798555s to wait for apiserver process to appear ...
	I1108 00:19:04.671059   51228 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:19:04.671076   51228 api_server.go:253] Checking apiserver healthz at https://192.168.72.116:8444/healthz ...
	I1108 00:19:04.677315   51228 api_server.go:279] https://192.168.72.116:8444/healthz returned 200:
	ok
	I1108 00:19:04.678430   51228 api_server.go:141] control plane version: v1.28.3
	I1108 00:19:04.678451   51228 api_server.go:131] duration metric: took 7.384898ms to wait for apiserver health ...
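
The healthz check logged above is a plain HTTPS GET against the apiserver endpoint until it answers 200/ok. A sketch of such a probe, assuming the apiserver's self-signed certificate forces verification to be skipped; minikube's actual implementation in api_server.go differs in detail:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthz polls url until it returns HTTP 200 or the timeout expires,
    // mirroring the "Checking apiserver healthz at ..." loop above.
    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// the apiserver serves a cert minted by the cluster CA, which a
    		// stock trust store does not know, so verification is skipped
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if resp, err := client.Get(url); err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // "returned 200: ok"
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("%s not healthy within %v", url, timeout)
    }

    func main() {
    	fmt.Println(waitHealthz("https://192.168.72.116:8444/healthz", 10*time.Second))
    }
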
	I1108 00:19:04.678457   51228 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:19:04.866585   51228 system_pods.go:59] 8 kube-system pods found
	I1108 00:19:04.866617   51228 system_pods.go:61] "coredns-5dd5756b68-tt9sm" [964a0552-9be0-4dbb-9a2f-0be3c93b8f83] Running
	I1108 00:19:04.866622   51228 system_pods.go:61] "etcd-default-k8s-diff-port-039263" [36863807-9899-4a8e-9a18-e3d938be8e8a] Running
	I1108 00:19:04.866626   51228 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-039263" [88677a44-54e3-41d7-8395-7616396a52d4] Running
	I1108 00:19:04.866631   51228 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-039263" [61a04987-85c4-462c-a4a7-1438c079b72b] Running
	I1108 00:19:04.866635   51228 system_pods.go:61] "kube-proxy-rhdhg" [405b26b9-e6b3-440d-8f28-60db650079a8] Running
	I1108 00:19:04.866639   51228 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-039263" [2a36824a-77da-4a54-94f4-484452f1b714] Running
	I1108 00:19:04.866666   51228 system_pods.go:61] "metrics-server-57f55c9bc5-j6t7g" [5c0e827c-8281-4b51-b0c7-d43d0aa22e29] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:19:04.866676   51228 system_pods.go:61] "storage-provisioner" [4cace2ff-d7cd-4d31-9f11-d410bc675cbf] Running
	I1108 00:19:04.866684   51228 system_pods.go:74] duration metric: took 188.222131ms to wait for pod list to return data ...
	I1108 00:19:04.866691   51228 default_sa.go:34] waiting for default service account to be created ...
	I1108 00:19:05.056224   51228 default_sa.go:45] found service account: "default"
	I1108 00:19:05.056251   51228 default_sa.go:55] duration metric: took 189.551289ms for default service account to be created ...
	I1108 00:19:05.056263   51228 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 00:19:05.259774   51228 system_pods.go:86] 8 kube-system pods found
	I1108 00:19:05.259800   51228 system_pods.go:89] "coredns-5dd5756b68-tt9sm" [964a0552-9be0-4dbb-9a2f-0be3c93b8f83] Running
	I1108 00:19:05.259805   51228 system_pods.go:89] "etcd-default-k8s-diff-port-039263" [36863807-9899-4a8e-9a18-e3d938be8e8a] Running
	I1108 00:19:05.259810   51228 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-039263" [88677a44-54e3-41d7-8395-7616396a52d4] Running
	I1108 00:19:05.259814   51228 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-039263" [61a04987-85c4-462c-a4a7-1438c079b72b] Running
	I1108 00:19:05.259818   51228 system_pods.go:89] "kube-proxy-rhdhg" [405b26b9-e6b3-440d-8f28-60db650079a8] Running
	I1108 00:19:05.259822   51228 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-039263" [2a36824a-77da-4a54-94f4-484452f1b714] Running
	I1108 00:19:05.259828   51228 system_pods.go:89] "metrics-server-57f55c9bc5-j6t7g" [5c0e827c-8281-4b51-b0c7-d43d0aa22e29] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:19:05.259832   51228 system_pods.go:89] "storage-provisioner" [4cace2ff-d7cd-4d31-9f11-d410bc675cbf] Running
	I1108 00:19:05.259840   51228 system_pods.go:126] duration metric: took 203.572791ms to wait for k8s-apps to be running ...
	I1108 00:19:05.259846   51228 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 00:19:05.259889   51228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:19:05.274254   51228 system_svc.go:56] duration metric: took 14.400341ms WaitForService to wait for kubelet.
	I1108 00:19:05.274277   51228 kubeadm.go:581] duration metric: took 6.278053459s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1108 00:19:05.274304   51228 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:19:05.457057   51228 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:19:05.457086   51228 node_conditions.go:123] node cpu capacity is 2
	I1108 00:19:05.457097   51228 node_conditions.go:105] duration metric: took 182.787127ms to run NodePressure ...
	I1108 00:19:05.457107   51228 start.go:228] waiting for startup goroutines ...
	I1108 00:19:05.457113   51228 start.go:233] waiting for cluster config update ...
	I1108 00:19:05.457122   51228 start.go:242] writing updated cluster config ...
	I1108 00:19:05.457358   51228 ssh_runner.go:195] Run: rm -f paused
	I1108 00:19:05.507414   51228 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1108 00:19:05.509695   51228 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-039263" cluster and "default" namespace by default
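
The closing version line compares the kubectl client's minor version against the cluster's; kubectl's documented support window is one minor version in either direction, so a skew above 1 is worth flagging. A sketch of that comparison, assuming plain "major.minor.patch" version strings:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorSkew returns the absolute difference between the minor versions
    // of two "major.minor.patch" strings, e.g. ("1.28.3", "1.28.3") -> 0.
    func minorSkew(client, server string) (int, error) {
    	minor := func(v string) (int, error) {
    		parts := strings.Split(v, ".")
    		if len(parts) < 2 {
    			return 0, fmt.Errorf("bad version %q", v)
    		}
    		return strconv.Atoi(parts[1])
    	}
    	c, err := minor(client)
    	if err != nil {
    		return 0, err
    	}
    	s, err := minor(server)
    	if err != nil {
    		return 0, err
    	}
    	if c > s {
    		return c - s, nil
    	}
    	return s - c, nil
    }

    func main() {
    	skew, _ := minorSkew("1.28.3", "1.28.3")
    	fmt.Println("minor skew:", skew) // 0, as in the log above
    }
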
	I1108 00:19:04.715259   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:07.214815   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:09.214886   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:11.715679   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:14.215690   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:16.716315   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:19.215323   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:21.715872   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:24.215543   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:26.409609   50022 pod_ready.go:81] duration metric: took 4m0.000552573s waiting for pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace to be "Ready" ...
	E1108 00:19:26.409644   50022 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1108 00:19:26.409659   50022 pod_ready.go:38] duration metric: took 4m1.201158343s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:19:26.409684   50022 kubeadm.go:640] restartCluster took 5m11.212754497s
	W1108 00:19:26.409757   50022 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1108 00:19:26.409790   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1108 00:19:31.401367   50022 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.991549602s)
	I1108 00:19:31.401473   50022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:19:31.415823   50022 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:19:31.425384   50022 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:19:31.435585   50022 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
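
The ls probe above exits with status 2 because none of the kubeconfig files survive `kubeadm reset`, so there is no stale config to clean and the code proceeds straight to `kubeadm init`. A sketch of that interpretation, run locally for simplicity (the real check drives the same command over SSH via ssh_runner):

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// same probe as in the log: GNU ls exits 2 when a listed file is missing
    	cmd := exec.Command("ls", "-la",
    		"/etc/kubernetes/admin.conf", "/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf", "/etc/kubernetes/scheduler.conf")
    	err := cmd.Run()
    	var ee *exec.ExitError
    	switch {
    	case err == nil:
    		fmt.Println("configs exist: stale config cleanup would run")
    	case errors.As(err, &ee) && ee.ExitCode() == 2:
    		fmt.Println("configs absent: skip cleanup, go straight to kubeadm init")
    	default:
    		fmt.Println("probe failed:", err)
    	}
    }
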
	I1108 00:19:31.435635   50022 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1108 00:19:31.492015   50022 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1108 00:19:31.492120   50022 kubeadm.go:322] [preflight] Running pre-flight checks
	I1108 00:19:31.649293   50022 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 00:19:31.649437   50022 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 00:19:31.649605   50022 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 00:19:31.886799   50022 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 00:19:31.886955   50022 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 00:19:31.896062   50022 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1108 00:19:32.038269   50022 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 00:19:32.040677   50022 out.go:204]   - Generating certificates and keys ...
	I1108 00:19:32.040833   50022 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1108 00:19:32.040945   50022 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1108 00:19:32.041037   50022 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1108 00:19:32.041085   50022 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1108 00:19:32.041142   50022 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1108 00:19:32.041231   50022 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1108 00:19:32.041346   50022 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1108 00:19:32.041441   50022 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1108 00:19:32.041594   50022 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1108 00:19:32.042173   50022 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1108 00:19:32.042236   50022 kubeadm.go:322] [certs] Using the existing "sa" key
	I1108 00:19:32.042302   50022 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 00:19:32.325005   50022 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 00:19:32.544755   50022 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 00:19:32.726539   50022 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 00:19:32.905403   50022 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 00:19:32.906525   50022 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 00:19:32.908371   50022 out.go:204]   - Booting up control plane ...
	I1108 00:19:32.908514   50022 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 00:19:32.919163   50022 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 00:19:32.919256   50022 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 00:19:32.919387   50022 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 00:19:32.928261   50022 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1108 00:19:42.937037   50022 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.006146 seconds
	I1108 00:19:42.937215   50022 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 00:19:42.955795   50022 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 00:19:43.479726   50022 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 00:19:43.479868   50022 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-590541 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1108 00:19:43.989897   50022 kubeadm.go:322] [bootstrap-token] Using token: rpiq38.6eoemv6ygv6ghnel
	I1108 00:19:43.991262   50022 out.go:204]   - Configuring RBAC rules ...
	I1108 00:19:43.991391   50022 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 00:19:44.001502   50022 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 00:19:44.006931   50022 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 00:19:44.012505   50022 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 00:19:44.021422   50022 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 00:19:44.111517   50022 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1108 00:19:44.412934   50022 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1108 00:19:44.412985   50022 kubeadm.go:322] 
	I1108 00:19:44.413073   50022 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1108 00:19:44.413088   50022 kubeadm.go:322] 
	I1108 00:19:44.413186   50022 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1108 00:19:44.413196   50022 kubeadm.go:322] 
	I1108 00:19:44.413230   50022 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1108 00:19:44.413317   50022 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 00:19:44.413388   50022 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 00:19:44.413398   50022 kubeadm.go:322] 
	I1108 00:19:44.413489   50022 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1108 00:19:44.413608   50022 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 00:19:44.413704   50022 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 00:19:44.413720   50022 kubeadm.go:322] 
	I1108 00:19:44.413851   50022 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1108 00:19:44.413974   50022 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1108 00:19:44.413988   50022 kubeadm.go:322] 
	I1108 00:19:44.414090   50022 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token rpiq38.6eoemv6ygv6ghnel \
	I1108 00:19:44.414288   50022 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 \
	I1108 00:19:44.414337   50022 kubeadm.go:322]     --control-plane 	  
	I1108 00:19:44.414347   50022 kubeadm.go:322] 
	I1108 00:19:44.414458   50022 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1108 00:19:44.414474   50022 kubeadm.go:322] 
	I1108 00:19:44.414593   50022 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token rpiq38.6eoemv6ygv6ghnel \
	I1108 00:19:44.414754   50022 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 
	I1108 00:19:44.416038   50022 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 00:19:44.416063   50022 cni.go:84] Creating CNI manager for ""
	I1108 00:19:44.416073   50022 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:19:44.417877   50022 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:19:44.419195   50022 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:19:44.448380   50022 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
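
For reference, a bridge conflist of the kind written to /etc/cni/net.d/1-k8s.conflist typically has the following shape. The actual 457-byte payload is generated by minikube and not shown in the log, so treat this as an illustrative example rather than the literal file:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
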
	I1108 00:19:44.474228   50022 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 00:19:44.474339   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:44.474380   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=old-k8s-version-590541 minikube.k8s.io/updated_at=2023_11_08T00_19_44_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:44.739449   50022 ops.go:34] apiserver oom_adj: -16
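
The oom_adj read above confirms the apiserver is strongly shielded from the OOM killer: -16 on the legacy -17..15 scale, where -17 disables OOM killing entirely. A sketch of the same read done natively instead of via cat, assuming a single kube-apiserver process:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// find the newest kube-apiserver PID, then read its legacy OOM adjustment
    	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
    	if err != nil {
    		fmt.Println("no kube-apiserver process:", err)
    		return
    	}
    	pid := strings.TrimSpace(string(out))
    	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		fmt.Println("read failed:", err)
    		return
    	}
    	// -16 (as in the log) means "very unlikely to be OOM-killed"
    	fmt.Printf("apiserver oom_adj: %s", data)
    }
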
	I1108 00:19:44.739605   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:44.848712   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:45.444347   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:45.944721   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:46.444140   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:46.944185   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:47.444342   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:47.944227   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:48.443941   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:48.944002   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:49.444440   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:49.943801   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:50.444481   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:50.944720   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:51.443857   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:51.943755   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:52.444663   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:52.944052   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:53.443917   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:53.943763   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:54.443886   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:54.944615   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:55.444156   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:55.944693   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:56.443823   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:56.944727   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:57.444188   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:57.943966   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:58.444659   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:58.944651   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:59.061808   50022 kubeadm.go:1081] duration metric: took 14.587519972s to wait for elevateKubeSystemPrivileges.
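
The burst of identical `kubectl get sa default` runs above (two per second for ~14.6s) is a poll: `kubeadm init` returns before the controller-manager has created the "default" ServiceAccount, and the privilege elevation only completes once it exists. A sketch of that loop, with the kubectl path and kubeconfig taken from the log as stand-ins:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	kubectl := "/var/lib/minikube/binaries/v1.16.0/kubectl"
    	args := []string{"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig"}

    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		// succeeds only once the controller-manager has created the
    		// default ServiceAccount in the freshly initialized cluster
    		if exec.Command(kubectl, args...).Run() == nil {
    			fmt.Println("default ServiceAccount exists; elevation can finish")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for default ServiceAccount")
    }
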
	I1108 00:19:59.061855   50022 kubeadm.go:406] StartCluster complete in 5m43.925088245s
	I1108 00:19:59.061878   50022 settings.go:142] acquiring lock: {Name:mk24113e0811d0822c92609e9886aa6fa175d90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:19:59.061962   50022 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:19:59.063740   50022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:19:59.064004   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 00:19:59.064107   50022 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1108 00:19:59.064182   50022 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-590541"
	I1108 00:19:59.064198   50022 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-590541"
	I1108 00:19:59.064213   50022 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-590541"
	W1108 00:19:59.064222   50022 addons.go:240] addon storage-provisioner should already be in state true
	I1108 00:19:59.064224   50022 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-590541"
	I1108 00:19:59.064233   50022 config.go:182] Loaded profile config "old-k8s-version-590541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1108 00:19:59.064236   50022 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-590541"
	I1108 00:19:59.064260   50022 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-590541"
	I1108 00:19:59.064265   50022 host.go:66] Checking if "old-k8s-version-590541" exists ...
	W1108 00:19:59.064274   50022 addons.go:240] addon metrics-server should already be in state true
	I1108 00:19:59.064406   50022 host.go:66] Checking if "old-k8s-version-590541" exists ...
	I1108 00:19:59.064720   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.064757   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.064761   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.064797   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.065271   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.065309   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.082041   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37295
	I1108 00:19:59.082534   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.083051   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.083075   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.083432   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.083970   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.084022   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.084099   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40485
	I1108 00:19:59.084222   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34213
	I1108 00:19:59.084440   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.084605   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.084870   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.084887   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.085151   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.085174   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.085248   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.085427   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetState
	I1108 00:19:59.085480   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.086399   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.086442   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.090677   50022 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-590541"
	W1108 00:19:59.090700   50022 addons.go:240] addon default-storageclass should already be in state true
	I1108 00:19:59.090728   50022 host.go:66] Checking if "old-k8s-version-590541" exists ...
	I1108 00:19:59.091092   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.091130   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.101788   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40869
	I1108 00:19:59.102208   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.102631   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.102648   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.103029   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.103219   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetState
	I1108 00:19:59.104809   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44911
	I1108 00:19:59.104937   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:19:59.106844   50022 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1108 00:19:59.105475   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.108350   50022 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 00:19:59.108374   50022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 00:19:59.108403   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:19:59.108551   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45009
	I1108 00:19:59.108910   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.108930   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.109878   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.109881   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.110039   50022 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-590541" context rescaled to 1 replicas
	I1108 00:19:59.110075   50022 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.49 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 00:19:59.111637   50022 out.go:177] * Verifying Kubernetes components...
	I1108 00:19:59.110208   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetState
	I1108 00:19:59.110398   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.113108   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.113220   50022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:19:59.113743   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.113792   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:19:59.114471   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.114510   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.115179   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:19:59.117011   50022 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:19:59.115897   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:19:59.116172   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:19:59.118325   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:19:59.118358   50022 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:19:59.118370   50022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 00:19:59.118383   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:19:59.118504   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:19:59.118696   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:19:59.118854   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:19:59.120889   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:19:59.121255   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:19:59.121280   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:19:59.121465   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:19:59.121647   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:19:59.121783   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:19:59.121868   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:19:59.135569   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40853
	I1108 00:19:59.135977   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.136428   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.136441   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.136799   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.137027   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetState
	I1108 00:19:59.138503   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:19:59.138735   50022 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 00:19:59.138745   50022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 00:19:59.138758   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:19:59.141494   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:19:59.141870   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:19:59.141895   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:19:59.142046   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:19:59.142248   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:19:59.142370   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:19:59.142592   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:19:59.281321   50022 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-590541" to be "Ready" ...
	I1108 00:19:59.281572   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 00:19:59.284783   50022 node_ready.go:49] node "old-k8s-version-590541" has status "Ready":"True"
	I1108 00:19:59.284804   50022 node_ready.go:38] duration metric: took 3.444344ms waiting for node "old-k8s-version-590541" to be "Ready" ...
	I1108 00:19:59.284830   50022 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:19:59.290322   50022 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:59.290908   50022 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 00:19:59.290925   50022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1108 00:19:59.311485   50022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:19:59.346809   50022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 00:19:59.350361   50022 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 00:19:59.350385   50022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 00:19:59.403305   50022 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:19:59.403328   50022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 00:19:59.479823   50022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:20:00.224554   50022 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
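
The sed pipeline at 00:19:59.281572 edits the CoreDNS ConfigMap in place: it inserts a hosts block ahead of the forward directive and a log directive ahead of errors, so host.minikube.internal resolves to the host gateway (192.168.50.1) from inside the cluster. After the replace, the affected Corefile fragment reads as follows (unrelated directives elided):

    .:53 {
            log
            errors
            ...
            hosts {
               192.168.50.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf
            ...
    }

The fallthrough directive matters here: names not listed in the hosts block still fall through to the kubernetes and forward plugins instead of getting NXDOMAIN.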
	I1108 00:20:00.659427   50022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.347903115s)
	I1108 00:20:00.659441   50022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.312604515s)
	I1108 00:20:00.659501   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.659533   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.659536   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.659549   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.659834   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.659857   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.659867   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.659876   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.659933   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Closing plugin on server side
	I1108 00:20:00.659981   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.660022   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.660051   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.660062   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.660131   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Closing plugin on server side
	I1108 00:20:00.660242   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.660254   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.660300   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.660321   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.851614   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.851637   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.851930   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Closing plugin on server side
	I1108 00:20:00.851996   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.852027   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.992341   50022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.5124613s)
	I1108 00:20:00.992412   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.992429   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.992774   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Closing plugin on server side
	I1108 00:20:00.992811   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.992830   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.992841   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.992854   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.993100   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.993122   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.993162   50022 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-590541"
	I1108 00:20:00.995051   50022 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1108 00:20:00.996839   50022 addons.go:502] enable addons completed in 1.932740124s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1108 00:20:01.324759   50022 pod_ready.go:102] pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace has status "Ready":"False"
	I1108 00:20:03.823744   50022 pod_ready.go:102] pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace has status "Ready":"False"
	I1108 00:20:06.322994   50022 pod_ready.go:102] pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace has status "Ready":"False"
	I1108 00:20:08.822755   50022 pod_ready.go:102] pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace has status "Ready":"False"
	I1108 00:20:10.823247   50022 pod_ready.go:102] pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace has status "Ready":"False"
	I1108 00:20:12.819017   50022 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-979rq" not found
	I1108 00:20:12.819052   50022 pod_ready.go:81] duration metric: took 13.528699598s waiting for pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace to be "Ready" ...
	E1108 00:20:12.819067   50022 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-979rq" not found
	I1108 00:20:12.819075   50022 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-tbfp7" in "kube-system" namespace to be "Ready" ...
	I1108 00:20:12.825970   50022 pod_ready.go:92] pod "coredns-5644d7b6d9-tbfp7" in "kube-system" namespace has status "Ready":"True"
	I1108 00:20:12.825988   50022 pod_ready.go:81] duration metric: took 6.906077ms waiting for pod "coredns-5644d7b6d9-tbfp7" in "kube-system" namespace to be "Ready" ...
	I1108 00:20:12.825996   50022 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p27g4" in "kube-system" namespace to be "Ready" ...
	I1108 00:20:12.830826   50022 pod_ready.go:92] pod "kube-proxy-p27g4" in "kube-system" namespace has status "Ready":"True"
	I1108 00:20:12.830843   50022 pod_ready.go:81] duration metric: took 4.841517ms waiting for pod "kube-proxy-p27g4" in "kube-system" namespace to be "Ready" ...
	I1108 00:20:12.830852   50022 pod_ready.go:38] duration metric: took 13.54601076s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:20:12.830866   50022 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:20:12.830909   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:20:12.849600   50022 api_server.go:72] duration metric: took 13.739491815s to wait for apiserver process to appear ...
	I1108 00:20:12.849634   50022 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:20:12.849653   50022 api_server.go:253] Checking apiserver healthz at https://192.168.50.49:8443/healthz ...
	I1108 00:20:12.856740   50022 api_server.go:279] https://192.168.50.49:8443/healthz returned 200:
	ok
	I1108 00:20:12.857940   50022 api_server.go:141] control plane version: v1.16.0
	I1108 00:20:12.857960   50022 api_server.go:131] duration metric: took 8.319568ms to wait for apiserver health ...
	I1108 00:20:12.857967   50022 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:20:12.862192   50022 system_pods.go:59] 4 kube-system pods found
	I1108 00:20:12.862217   50022 system_pods.go:61] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:12.862222   50022 system_pods.go:61] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:12.862230   50022 system_pods.go:61] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:12.862239   50022 system_pods.go:61] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:12.862248   50022 system_pods.go:74] duration metric: took 4.275078ms to wait for pod list to return data ...
	I1108 00:20:12.862257   50022 default_sa.go:34] waiting for default service account to be created ...
	I1108 00:20:12.867018   50022 default_sa.go:45] found service account: "default"
	I1108 00:20:12.867043   50022 default_sa.go:55] duration metric: took 4.778337ms for default service account to be created ...
	I1108 00:20:12.867052   50022 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 00:20:12.871638   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:12.871664   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:12.871671   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:12.871682   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:12.871688   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:12.871706   50022 retry.go:31] will retry after 307.408821ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:13.184897   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:13.184927   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:13.184944   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:13.184954   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:13.184963   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:13.184984   50022 retry.go:31] will retry after 301.786347ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:13.492026   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:13.492053   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:13.492058   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:13.492065   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:13.492070   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:13.492085   50022 retry.go:31] will retry after 396.219719ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:13.893320   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:13.893348   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:13.893356   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:13.893366   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:13.893372   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:13.893390   50022 retry.go:31] will retry after 592.540002ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:14.490613   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:14.490638   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:14.490644   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:14.490651   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:14.490655   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:14.490670   50022 retry.go:31] will retry after 512.19038ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:15.008506   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:15.008533   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:15.008539   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:15.008545   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:15.008586   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:15.008606   50022 retry.go:31] will retry after 704.779032ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:15.719115   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:15.719140   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:15.719145   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:15.719152   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:15.719156   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:15.719174   50022 retry.go:31] will retry after 892.457504ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:16.616738   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:16.616764   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:16.616770   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:16.616776   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:16.616781   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:16.616795   50022 retry.go:31] will retry after 1.107800827s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:17.729962   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:17.729989   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:17.729997   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:17.730007   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:17.730014   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:17.730032   50022 retry.go:31] will retry after 1.24176205s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:18.976866   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:18.976891   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:18.976897   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:18.976905   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:18.976910   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:18.976925   50022 retry.go:31] will retry after 1.449825188s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:20.432723   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:20.432753   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:20.432760   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:20.432770   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:20.432776   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:20.432796   50022 retry.go:31] will retry after 1.764186569s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:22.202432   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:22.202465   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:22.202473   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:22.202484   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:22.202491   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:22.202522   50022 retry.go:31] will retry after 3.392893976s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:25.600685   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:25.600712   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:25.600717   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:25.600723   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:25.600728   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:25.600743   50022 retry.go:31] will retry after 3.537590817s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:29.143439   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:29.143464   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:29.143468   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:29.143475   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:29.143482   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:29.143502   50022 retry.go:31] will retry after 3.82527374s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:32.973763   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:32.973796   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:32.973804   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:32.973814   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:32.973821   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:32.973840   50022 retry.go:31] will retry after 6.225201923s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:39.204648   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:39.204682   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:39.204690   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:39.204702   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:39.204710   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:39.204729   50022 retry.go:31] will retry after 7.177772259s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:46.388992   50022 system_pods.go:86] 5 kube-system pods found
	I1108 00:20:46.389016   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:46.389022   50022 system_pods.go:89] "kube-apiserver-old-k8s-version-590541" [87b2cf34-c41c-47e0-9042-75cc9f45a3c5] Pending
	I1108 00:20:46.389025   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:46.389032   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:46.389037   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:46.389052   50022 retry.go:31] will retry after 8.995080935s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:55.391202   50022 system_pods.go:86] 7 kube-system pods found
	I1108 00:20:55.391228   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:55.391233   50022 system_pods.go:89] "etcd-old-k8s-version-590541" [0efed662-1891-4909-9452-76ec2984dbe2] Running
	I1108 00:20:55.391237   50022 system_pods.go:89] "kube-apiserver-old-k8s-version-590541" [87b2cf34-c41c-47e0-9042-75cc9f45a3c5] Running
	I1108 00:20:55.391241   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:55.391245   50022 system_pods.go:89] "kube-scheduler-old-k8s-version-590541" [a722f002-c4ab-467a-810a-20cf46a13211] Pending
	I1108 00:20:55.391252   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:55.391256   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:55.391272   50022 retry.go:31] will retry after 10.028239262s: missing components: kube-controller-manager, kube-scheduler
	I1108 00:21:05.426292   50022 system_pods.go:86] 8 kube-system pods found
	I1108 00:21:05.426317   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:21:05.426323   50022 system_pods.go:89] "etcd-old-k8s-version-590541" [0efed662-1891-4909-9452-76ec2984dbe2] Running
	I1108 00:21:05.426327   50022 system_pods.go:89] "kube-apiserver-old-k8s-version-590541" [87b2cf34-c41c-47e0-9042-75cc9f45a3c5] Running
	I1108 00:21:05.426331   50022 system_pods.go:89] "kube-controller-manager-old-k8s-version-590541" [90563d50-3d48-4256-ae70-82a2a6d1c251] Running
	I1108 00:21:05.426335   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:21:05.426339   50022 system_pods.go:89] "kube-scheduler-old-k8s-version-590541" [a722f002-c4ab-467a-810a-20cf46a13211] Running
	I1108 00:21:05.426345   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:21:05.426349   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:21:05.426356   50022 system_pods.go:126] duration metric: took 52.559298515s to wait for k8s-apps to be running ...
	I1108 00:21:05.426363   50022 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 00:21:05.426403   50022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:21:05.443281   50022 system_svc.go:56] duration metric: took 16.903571ms WaitForService to wait for kubelet.
	I1108 00:21:05.443315   50022 kubeadm.go:581] duration metric: took 1m6.333213694s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1108 00:21:05.443337   50022 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:21:05.447040   50022 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:21:05.447064   50022 node_conditions.go:123] node cpu capacity is 2
	I1108 00:21:05.447074   50022 node_conditions.go:105] duration metric: took 3.731788ms to run NodePressure ...
	I1108 00:21:05.447083   50022 start.go:228] waiting for startup goroutines ...
	I1108 00:21:05.447089   50022 start.go:233] waiting for cluster config update ...
	I1108 00:21:05.447098   50022 start.go:242] writing updated cluster config ...
	I1108 00:21:05.447409   50022 ssh_runner.go:195] Run: rm -f paused
	I1108 00:21:05.496203   50022 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1108 00:21:05.498233   50022 out.go:177] 
	W1108 00:21:05.499660   50022 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1108 00:21:05.500985   50022 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1108 00:21:05.502464   50022 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-590541" cluster and "default" namespace by default
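	
	The 12-minor-version skew flagged above (host kubectl 1.28.3 vs. cluster 1.16.0) can be sidestepped with the bundled kubectl the hint mentions; a minimal sketch, assuming the profile name from this log and the binary path used elsewhere in this report:
	
	  out/minikube-linux-amd64 -p old-k8s-version-590541 kubectl -- get pods -A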
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-11-08 00:13:32 UTC, ends at Wed 2023-11-08 00:28:07 UTC. --
	Nov 08 00:28:07 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:28:07.218005492Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562\"" file="storage/storage_transport.go:185"
	Nov 08 00:28:07 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:28:07.218130959Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc\"" file="storage/storage_transport.go:185"
	Nov 08 00:28:07 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:28:07.218493314Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"" file="storage/storage_transport.go:185"
	Nov 08 00:28:07 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:28:07.218655661Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,RepoTags:[registry.k8s.io/kube-apiserver:v1.28.3],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d],Size_:127165392,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,RepoTags:[registry.k8s.io/kube-controller-manager:v1.28.3],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707 registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d],Size_:123188534,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:6d1b4
fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,RepoTags:[registry.k8s.io/kube-scheduler:v1.28.3],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725 registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374],Size_:61498678,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,RepoTags:[registry.k8s.io/kube-proxy:v1.28.3],RepoDigests:[registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8 registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072],Size_:74691991,Uid:nil,Username:,Spec:nil,},&Image{Id:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause@sha256:8d4106c
88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10],Size_:750414,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,},&Image{Id:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,RepoTags:[registry.k8s.io/etcd:3.5.9-0],RepoDigests:[registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15 registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3],Size_:295456551,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378],Size_:53621675,Uid:nil,Username:,Spec:nil,},&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],Re
poDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},&Image{Id:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,RepoTags:[docker.io/kindest/kindnetd:v20230809-80a64d96],RepoDigests:[docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052 docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4],Size_:65258016,Uid:nil,Username:,Spec:nil,},&Image{Id:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,RepoTags:[gcr.io/k8s-minikube/busybox:1.28.4-glibc],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998],Size_:4631262,U
id:nil,Username:,Spec:nil,},},}" file="go-grpc-middleware/chain.go:25" id=2829bef1-0ba9-4334-ba99-f1415078ae70 name=/runtime.v1.ImageService/ListImages
	Nov 08 00:28:07 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:28:07.234871232Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6c245583-51f4-4487-8453-cb99c00ff462 name=/runtime.v1.RuntimeService/Version
	Nov 08 00:28:07 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:28:07.234960017Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6c245583-51f4-4487-8453-cb99c00ff462 name=/runtime.v1.RuntimeService/Version
	Nov 08 00:28:07 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:28:07.238845890Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d57b90cb-b3d1-43eb-863d-9ec125f8038e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:28:07 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:28:07.239255275Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699403287239242340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=d57b90cb-b3d1-43eb-863d-9ec125f8038e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:28:07 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:28:07.240171347Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2db1f4ea-f5af-41cb-86ab-cd0cd300d13b name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:28:07 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:28:07.240269974Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2db1f4ea-f5af-41cb-86ab-cd0cd300d13b name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:28:07 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:28:07.240516283Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3baa241fce7c43bab30bd0b77cd3079988292b3e06d253102ef620bdef914922,PodSandboxId:2e86de6acbdd982b5e175f4dd08f28c8b8decc5748c7f2d2d7dbd5a73648b647,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699402743441274286,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cace2ff-d7cd-4d31-9f11-d410bc675cbf,},Annotations:map[string]string{io.kubernetes.container.hash: 64da2d49,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553d948d1c69c70129d55ba50eaf0b2a16b8e4028908ace6c6a852a93ffd3ca5,PodSandboxId:bc2ef5da14b350463f9dd7ed1fb741b709c54643cbd7ed430933d11b14672ca5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699402742789255094,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rhdhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 405b26b9-e6b3-440d-8f28-60db650079a8,},Annotations:map[string]string{io.kubernetes.container.hash: 66eccec0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099e79a93f06647861e8ac86286ab0091d838e8e4c69779995ea7de641c854c3,PodSandboxId:0384e739371ad111505093202f0b03033263785760b05f37c0ee5964a654a203,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699402741615317724,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tt9sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964a0552-9be0-4dbb-9a2f-0be3c93b8f83,},Annotations:map[string]string{io.kubernetes.container.hash: 2d6995fc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc3200ebf1e5c1d240f0d732419ba5107161506fa65d9572379bc6b978322da4,PodSandboxId:40a6f301c41c87f156c13dbbba5bb9903d60faa20966bb8cf515713e46b75e31,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699402717661773400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-039263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8386c5fbded7d9148
0b4ab5948c70416,},Annotations:map[string]string{io.kubernetes.container.hash: f0eaf05,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9886e2d0bcb1f12980973b77af67452b7878638c5ff2d9ac0540bf4332f10392,PodSandboxId:0084df71fd8718c5b64b976397d055f8347073777c01d14022cb905a1d34775f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699402717506641933,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-039263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66be78a13c9085fed5
3443574bd068ff,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e89ecbf60951682cdc5f067fe7b5302ef77673247eeee26a25e9835f9bff4b,PodSandboxId:fb393347badea335742a064fcc564c65cb9eeefc13a09420d0479239f7572b80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699402717115241176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-039263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14855cec42a18ea4b2
c790ced4285e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 9183250f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f1d88317e3e89285ca556ec4ee523b694a605081d65d8f6e27d627099ab0fb,PodSandboxId:cd45b30c32332993a313c682436b1ec33c74b2f8706d5ff283ae8d27103f9bb8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699402717044250777,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-039263,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: f2addd0e9156fe002e814e1d06076f53,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2db1f4ea-f5af-41cb-86ab-cd0cd300d13b name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:28:07 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:28:07.294080485Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=211b4a8f-7e71-479a-b494-cdf441021b9c name=/runtime.v1.RuntimeService/Version
	Nov 08 00:28:07 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:28:07.294193562Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=211b4a8f-7e71-479a-b494-cdf441021b9c name=/runtime.v1.RuntimeService/Version
	Nov 08 00:28:07 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:28:07.295896714Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=fef760be-8889-4d27-a5f9-c8975d771551 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:28:07 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:28:07.296326088Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699403287296312004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=fef760be-8889-4d27-a5f9-c8975d771551 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:28:07 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:28:07.297469967Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4edd0487-3ee4-41db-97aa-f58e1df2677a name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:28:07 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:28:07.297536965Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4edd0487-3ee4-41db-97aa-f58e1df2677a name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:28:07 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:28:07.297699981Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3baa241fce7c43bab30bd0b77cd3079988292b3e06d253102ef620bdef914922,PodSandboxId:2e86de6acbdd982b5e175f4dd08f28c8b8decc5748c7f2d2d7dbd5a73648b647,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699402743441274286,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cace2ff-d7cd-4d31-9f11-d410bc675cbf,},Annotations:map[string]string{io.kubernetes.container.hash: 64da2d49,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553d948d1c69c70129d55ba50eaf0b2a16b8e4028908ace6c6a852a93ffd3ca5,PodSandboxId:bc2ef5da14b350463f9dd7ed1fb741b709c54643cbd7ed430933d11b14672ca5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699402742789255094,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rhdhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 405b26b9-e6b3-440d-8f28-60db650079a8,},Annotations:map[string]string{io.kubernetes.container.hash: 66eccec0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099e79a93f06647861e8ac86286ab0091d838e8e4c69779995ea7de641c854c3,PodSandboxId:0384e739371ad111505093202f0b03033263785760b05f37c0ee5964a654a203,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699402741615317724,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tt9sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964a0552-9be0-4dbb-9a2f-0be3c93b8f83,},Annotations:map[string]string{io.kubernetes.container.hash: 2d6995fc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc3200ebf1e5c1d240f0d732419ba5107161506fa65d9572379bc6b978322da4,PodSandboxId:40a6f301c41c87f156c13dbbba5bb9903d60faa20966bb8cf515713e46b75e31,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699402717661773400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-039263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8386c5fbded7d9148
0b4ab5948c70416,},Annotations:map[string]string{io.kubernetes.container.hash: f0eaf05,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9886e2d0bcb1f12980973b77af67452b7878638c5ff2d9ac0540bf4332f10392,PodSandboxId:0084df71fd8718c5b64b976397d055f8347073777c01d14022cb905a1d34775f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699402717506641933,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-039263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66be78a13c9085fed5
3443574bd068ff,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e89ecbf60951682cdc5f067fe7b5302ef77673247eeee26a25e9835f9bff4b,PodSandboxId:fb393347badea335742a064fcc564c65cb9eeefc13a09420d0479239f7572b80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699402717115241176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-039263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14855cec42a18ea4b2
c790ced4285e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 9183250f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f1d88317e3e89285ca556ec4ee523b694a605081d65d8f6e27d627099ab0fb,PodSandboxId:cd45b30c32332993a313c682436b1ec33c74b2f8706d5ff283ae8d27103f9bb8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699402717044250777,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-039263,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: f2addd0e9156fe002e814e1d06076f53,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4edd0487-3ee4-41db-97aa-f58e1df2677a name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:28:07 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:28:07.340203410Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e9d8cc06-fc57-441c-8603-8c41a92dbb21 name=/runtime.v1.RuntimeService/Version
	Nov 08 00:28:07 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:28:07.340280990Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e9d8cc06-fc57-441c-8603-8c41a92dbb21 name=/runtime.v1.RuntimeService/Version
	Nov 08 00:28:07 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:28:07.341717585Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ab430176-29b7-4862-b737-2a471ceebb53 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:28:07 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:28:07.342105432Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699403287342091190,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=ab430176-29b7-4862-b737-2a471ceebb53 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:28:07 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:28:07.343276158Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6178d805-389a-4ad8-a363-d7dd8853b842 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:28:07 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:28:07.343328139Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6178d805-389a-4ad8-a363-d7dd8853b842 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:28:07 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:28:07.343584231Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3baa241fce7c43bab30bd0b77cd3079988292b3e06d253102ef620bdef914922,PodSandboxId:2e86de6acbdd982b5e175f4dd08f28c8b8decc5748c7f2d2d7dbd5a73648b647,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699402743441274286,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cace2ff-d7cd-4d31-9f11-d410bc675cbf,},Annotations:map[string]string{io.kubernetes.container.hash: 64da2d49,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553d948d1c69c70129d55ba50eaf0b2a16b8e4028908ace6c6a852a93ffd3ca5,PodSandboxId:bc2ef5da14b350463f9dd7ed1fb741b709c54643cbd7ed430933d11b14672ca5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699402742789255094,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rhdhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 405b26b9-e6b3-440d-8f28-60db650079a8,},Annotations:map[string]string{io.kubernetes.container.hash: 66eccec0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099e79a93f06647861e8ac86286ab0091d838e8e4c69779995ea7de641c854c3,PodSandboxId:0384e739371ad111505093202f0b03033263785760b05f37c0ee5964a654a203,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699402741615317724,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tt9sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964a0552-9be0-4dbb-9a2f-0be3c93b8f83,},Annotations:map[string]string{io.kubernetes.container.hash: 2d6995fc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc3200ebf1e5c1d240f0d732419ba5107161506fa65d9572379bc6b978322da4,PodSandboxId:40a6f301c41c87f156c13dbbba5bb9903d60faa20966bb8cf515713e46b75e31,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699402717661773400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-039263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8386c5fbded7d9148
0b4ab5948c70416,},Annotations:map[string]string{io.kubernetes.container.hash: f0eaf05,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9886e2d0bcb1f12980973b77af67452b7878638c5ff2d9ac0540bf4332f10392,PodSandboxId:0084df71fd8718c5b64b976397d055f8347073777c01d14022cb905a1d34775f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699402717506641933,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-039263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66be78a13c9085fed5
3443574bd068ff,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e89ecbf60951682cdc5f067fe7b5302ef77673247eeee26a25e9835f9bff4b,PodSandboxId:fb393347badea335742a064fcc564c65cb9eeefc13a09420d0479239f7572b80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699402717115241176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-039263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14855cec42a18ea4b2
c790ced4285e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 9183250f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f1d88317e3e89285ca556ec4ee523b694a605081d65d8f6e27d627099ab0fb,PodSandboxId:cd45b30c32332993a313c682436b1ec33c74b2f8706d5ff283ae8d27103f9bb8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699402717044250777,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-039263,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: f2addd0e9156fe002e814e1d06076f53,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6178d805-389a-4ad8-a363-d7dd8853b842 name=/runtime.v1.RuntimeService/ListContainers
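	
	The block above is the crio unit's systemd journal as captured by the log collector. Assuming SSH access to the node, the same stream can likely be read directly (standard journalctl flags; profile name taken from this log):
	
	  minikube -p default-k8s-diff-port-039263 ssh -- sudo journalctl -u crio --no-pager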
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3baa241fce7c4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   2e86de6acbdd9       storage-provisioner
	553d948d1c69c       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   9 minutes ago       Running             kube-proxy                0                   bc2ef5da14b35       kube-proxy-rhdhg
	099e79a93f066       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   0384e739371ad       coredns-5dd5756b68-tt9sm
	dc3200ebf1e5c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   40a6f301c41c8       etcd-default-k8s-diff-port-039263
	9886e2d0bcb1f       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   9 minutes ago       Running             kube-scheduler            2                   0084df71fd871       kube-scheduler-default-k8s-diff-port-039263
	82e89ecbf6095       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   9 minutes ago       Running             kube-apiserver            2                   fb393347badea       kube-apiserver-default-k8s-diff-port-039263
	18f1d88317e3e       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   9 minutes ago       Running             kube-controller-manager   2                   cd45b30c32332       kube-controller-manager-default-k8s-diff-port-039263
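	
	The table above matches the column layout of crictl's container listing; a likely way to reproduce it on the node, again assuming the profile name from this log:
	
	  minikube -p default-k8s-diff-port-039263 ssh -- sudo crictl ps -a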
	
	* 
	* ==> coredns [099e79a93f06647861e8ac86286ab0091d838e8e4c69779995ea7de641c854c3] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
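	
	These CoreDNS lines can also be pulled straight from the running pod; a sketch, assuming the conventional kubeadm label k8s-app=kube-dns on the CoreDNS deployment:
	
	  kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20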
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-039263
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-039263
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e
	                    minikube.k8s.io/name=default-k8s-diff-port-039263
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_08T00_18_45_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Nov 2023 00:18:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-039263
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Nov 2023 00:28:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Nov 2023 00:24:11 +0000   Wed, 08 Nov 2023 00:18:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Nov 2023 00:24:11 +0000   Wed, 08 Nov 2023 00:18:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Nov 2023 00:24:11 +0000   Wed, 08 Nov 2023 00:18:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Nov 2023 00:24:11 +0000   Wed, 08 Nov 2023 00:18:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.116
	  Hostname:    default-k8s-diff-port-039263
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 ae2a601a9aa5456da7ba4055df3e6884
	  System UUID:                ae2a601a-9aa5-456d-a7ba-4055df3e6884
	  Boot ID:                    4cd87de2-2e03-4df0-ad9e-9645a9503d64
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-tt9sm                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m9s
	  kube-system                 etcd-default-k8s-diff-port-039263                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m22s
	  kube-system                 kube-apiserver-default-k8s-diff-port-039263             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-039263    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m25s
	  kube-system                 kube-proxy-rhdhg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-scheduler-default-k8s-diff-port-039263             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 metrics-server-57f55c9bc5-j6t7g                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m3s                   kube-proxy       
	  Normal  Starting                 9m32s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m31s (x8 over 9m31s)  kubelet          Node default-k8s-diff-port-039263 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m31s (x8 over 9m31s)  kubelet          Node default-k8s-diff-port-039263 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m31s (x7 over 9m31s)  kubelet          Node default-k8s-diff-port-039263 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m22s                  kubelet          Node default-k8s-diff-port-039263 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m22s                  kubelet          Node default-k8s-diff-port-039263 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m22s                  kubelet          Node default-k8s-diff-port-039263 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m22s                  kubelet          Node default-k8s-diff-port-039263 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m22s                  kubelet          Node default-k8s-diff-port-039263 status is now: NodeReady
	  Normal  RegisteredNode           9m10s                  node-controller  Node default-k8s-diff-port-039263 event: Registered Node default-k8s-diff-port-039263 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov 8 00:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067766] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.576208] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.525977] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.152299] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.494811] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.394285] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.125489] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.171000] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.125389] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.268178] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[Nov 8 00:14] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	[ +19.570464] kauditd_printk_skb: 29 callbacks suppressed
	[Nov 8 00:18] systemd-fstab-generator[3512]: Ignoring "noauto" for root device
	[ +10.307595] systemd-fstab-generator[3836]: Ignoring "noauto" for root device
	[ +13.835443] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [dc3200ebf1e5c1d240f0d732419ba5107161506fa65d9572379bc6b978322da4] <==
	* {"level":"info","ts":"2023-11-08T00:18:39.262583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86db9aa99badf4aa switched to configuration voters=(9717530674233996458)"}
	{"level":"info","ts":"2023-11-08T00:18:39.262734Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"64fcf4fe45fcdc82","local-member-id":"86db9aa99badf4aa","added-peer-id":"86db9aa99badf4aa","added-peer-peer-urls":["https://192.168.72.116:2380"]}
	{"level":"info","ts":"2023-11-08T00:18:39.277642Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-08T00:18:39.277863Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.116:2380"}
	{"level":"info","ts":"2023-11-08T00:18:39.277998Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.116:2380"}
	{"level":"info","ts":"2023-11-08T00:18:39.28319Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-08T00:18:39.283119Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"86db9aa99badf4aa","initial-advertise-peer-urls":["https://192.168.72.116:2380"],"listen-peer-urls":["https://192.168.72.116:2380"],"advertise-client-urls":["https://192.168.72.116:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.116:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-08T00:18:39.610449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86db9aa99badf4aa is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-08T00:18:39.610514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86db9aa99badf4aa became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-08T00:18:39.610532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86db9aa99badf4aa received MsgPreVoteResp from 86db9aa99badf4aa at term 1"}
	{"level":"info","ts":"2023-11-08T00:18:39.610543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86db9aa99badf4aa became candidate at term 2"}
	{"level":"info","ts":"2023-11-08T00:18:39.61055Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86db9aa99badf4aa received MsgVoteResp from 86db9aa99badf4aa at term 2"}
	{"level":"info","ts":"2023-11-08T00:18:39.610563Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86db9aa99badf4aa became leader at term 2"}
	{"level":"info","ts":"2023-11-08T00:18:39.61057Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 86db9aa99badf4aa elected leader 86db9aa99badf4aa at term 2"}
	{"level":"info","ts":"2023-11-08T00:18:39.614628Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"86db9aa99badf4aa","local-member-attributes":"{Name:default-k8s-diff-port-039263 ClientURLs:[https://192.168.72.116:2379]}","request-path":"/0/members/86db9aa99badf4aa/attributes","cluster-id":"64fcf4fe45fcdc82","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-08T00:18:39.614688Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T00:18:39.61544Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T00:18:39.616493Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.116:2379"}
	{"level":"info","ts":"2023-11-08T00:18:39.616695Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T00:18:39.617848Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-08T00:18:39.618431Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-08T00:18:39.618455Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-08T00:18:39.619766Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"64fcf4fe45fcdc82","local-member-id":"86db9aa99badf4aa","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T00:18:39.619884Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T00:18:39.619905Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  00:28:07 up 14 min,  0 users,  load average: 0.17, 0.25, 0.20
	Linux default-k8s-diff-port-039263 5.10.57 #1 SMP Tue Nov 7 06:51:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [82e89ecbf60951682cdc5f067fe7b5302ef77673247eeee26a25e9835f9bff4b] <==
	* W1108 00:23:42.714623       1 handler_proxy.go:93] no RequestInfo found in the context
	E1108 00:23:42.714699       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1108 00:23:42.714716       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1108 00:23:42.715052       1 handler_proxy.go:93] no RequestInfo found in the context
	E1108 00:23:42.715144       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1108 00:23:42.715936       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1108 00:24:41.590927       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1108 00:24:42.715877       1 handler_proxy.go:93] no RequestInfo found in the context
	E1108 00:24:42.715954       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1108 00:24:42.715967       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1108 00:24:42.717289       1 handler_proxy.go:93] no RequestInfo found in the context
	E1108 00:24:42.717435       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1108 00:24:42.717495       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1108 00:25:41.590560       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1108 00:26:41.590067       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1108 00:26:42.716494       1 handler_proxy.go:93] no RequestInfo found in the context
	E1108 00:26:42.716594       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1108 00:26:42.716612       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1108 00:26:42.717891       1 handler_proxy.go:93] no RequestInfo found in the context
	E1108 00:26:42.718038       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1108 00:26:42.718094       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1108 00:27:41.590078       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [18f1d88317e3e89285ca556ec4ee523b694a605081d65d8f6e27d627099ab0fb] <==
	* I1108 00:22:28.415579       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:22:57.933123       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:22:58.425105       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:23:27.939606       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:23:28.434240       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:23:57.946198       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:23:58.442877       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:24:27.952451       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:24:28.452510       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1108 00:24:53.713135       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="454.699µs"
	E1108 00:24:57.959906       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:24:58.463266       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1108 00:25:04.709733       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="95.648µs"
	E1108 00:25:27.966626       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:25:28.472751       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:25:57.972489       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:25:58.481450       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:26:27.979593       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:26:28.490498       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:26:57.985404       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:26:58.499660       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:27:27.993589       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:27:28.515654       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:27:57.999477       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:27:58.524141       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [553d948d1c69c70129d55ba50eaf0b2a16b8e4028908ace6c6a852a93ffd3ca5] <==
	* I1108 00:19:03.586848       1 server_others.go:69] "Using iptables proxy"
	I1108 00:19:03.622479       1 node.go:141] Successfully retrieved node IP: 192.168.72.116
	I1108 00:19:03.706581       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1108 00:19:03.706736       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1108 00:19:03.711901       1 server_others.go:152] "Using iptables Proxier"
	I1108 00:19:03.712880       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1108 00:19:03.714199       1 server.go:846] "Version info" version="v1.28.3"
	I1108 00:19:03.714242       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 00:19:03.716294       1 config.go:188] "Starting service config controller"
	I1108 00:19:03.716762       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1108 00:19:03.716834       1 config.go:97] "Starting endpoint slice config controller"
	I1108 00:19:03.716861       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1108 00:19:03.718523       1 config.go:315] "Starting node config controller"
	I1108 00:19:03.718664       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1108 00:19:03.817857       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1108 00:19:03.817894       1 shared_informer.go:318] Caches are synced for service config
	I1108 00:19:03.819282       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [9886e2d0bcb1f12980973b77af67452b7878638c5ff2d9ac0540bf4332f10392] <==
	* W1108 00:18:42.590731       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1108 00:18:42.590804       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1108 00:18:42.644880       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1108 00:18:42.645001       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1108 00:18:42.783201       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1108 00:18:42.783273       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1108 00:18:42.836710       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1108 00:18:42.836784       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1108 00:18:42.900654       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1108 00:18:42.900732       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1108 00:18:43.020963       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1108 00:18:43.021078       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 00:18:43.041263       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1108 00:18:43.041398       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1108 00:18:43.078744       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1108 00:18:43.078814       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1108 00:18:43.078911       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1108 00:18:43.078927       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1108 00:18:43.104153       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1108 00:18:43.104274       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1108 00:18:43.155843       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1108 00:18:43.155896       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1108 00:18:43.160232       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1108 00:18:43.160279       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1108 00:18:46.130450       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-11-08 00:13:32 UTC, ends at Wed 2023-11-08 00:28:07 UTC. --
	Nov 08 00:25:27 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:25:27.692684    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j6t7g" podUID="5c0e827c-8281-4b51-b0c7-d43d0aa22e29"
	Nov 08 00:25:42 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:25:42.690519    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j6t7g" podUID="5c0e827c-8281-4b51-b0c7-d43d0aa22e29"
	Nov 08 00:25:45 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:25:45.844023    3843 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 08 00:25:45 default-k8s-diff-port-039263 kubelet[3843]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 08 00:25:45 default-k8s-diff-port-039263 kubelet[3843]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 08 00:25:45 default-k8s-diff-port-039263 kubelet[3843]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 08 00:25:54 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:25:54.690990    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j6t7g" podUID="5c0e827c-8281-4b51-b0c7-d43d0aa22e29"
	Nov 08 00:26:05 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:26:05.691283    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j6t7g" podUID="5c0e827c-8281-4b51-b0c7-d43d0aa22e29"
	Nov 08 00:26:20 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:26:20.691714    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j6t7g" podUID="5c0e827c-8281-4b51-b0c7-d43d0aa22e29"
	Nov 08 00:26:33 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:26:33.692119    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j6t7g" podUID="5c0e827c-8281-4b51-b0c7-d43d0aa22e29"
	Nov 08 00:26:45 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:26:45.843141    3843 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 08 00:26:45 default-k8s-diff-port-039263 kubelet[3843]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 08 00:26:45 default-k8s-diff-port-039263 kubelet[3843]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 08 00:26:45 default-k8s-diff-port-039263 kubelet[3843]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 08 00:26:48 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:26:48.691288    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j6t7g" podUID="5c0e827c-8281-4b51-b0c7-d43d0aa22e29"
	Nov 08 00:27:00 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:27:00.691149    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j6t7g" podUID="5c0e827c-8281-4b51-b0c7-d43d0aa22e29"
	Nov 08 00:27:15 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:27:15.691679    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j6t7g" podUID="5c0e827c-8281-4b51-b0c7-d43d0aa22e29"
	Nov 08 00:27:27 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:27:27.690956    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j6t7g" podUID="5c0e827c-8281-4b51-b0c7-d43d0aa22e29"
	Nov 08 00:27:38 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:27:38.691136    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j6t7g" podUID="5c0e827c-8281-4b51-b0c7-d43d0aa22e29"
	Nov 08 00:27:45 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:27:45.843716    3843 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 08 00:27:45 default-k8s-diff-port-039263 kubelet[3843]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 08 00:27:45 default-k8s-diff-port-039263 kubelet[3843]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 08 00:27:45 default-k8s-diff-port-039263 kubelet[3843]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 08 00:27:51 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:27:51.692302    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j6t7g" podUID="5c0e827c-8281-4b51-b0c7-d43d0aa22e29"
	Nov 08 00:28:03 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:28:03.691686    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j6t7g" podUID="5c0e827c-8281-4b51-b0c7-d43d0aa22e29"
	
	* 
	* ==> storage-provisioner [3baa241fce7c43bab30bd0b77cd3079988292b3e06d253102ef620bdef914922] <==
	* I1108 00:19:03.603266       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 00:19:03.623479       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 00:19:03.623656       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1108 00:19:03.640018       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 00:19:03.640175       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-039263_ce9c89a2-842e-4265-aad4-e729b6e29abf!
	I1108 00:19:03.641282       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f8aca3c7-5434-4066-adcb-dd1d0fd2b186", APIVersion:"v1", ResourceVersion:"465", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-039263_ce9c89a2-842e-4265-aad4-e729b6e29abf became leader
	I1108 00:19:03.740836       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-039263_ce9c89a2-842e-4265-aad4-e729b6e29abf!
	

                                                
                                                
-- /stdout --
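Two of the recurring errors in the dump above are expected rather than diagnostic. The metrics-server ImagePullBackOff is induced by the test itself: the Audit table later in this report shows `addons enable metrics-server -p default-k8s-diff-port-039263 ... --registries=MetricsServer=fake.domain`, so the kubelet is deliberately pulling from an unresolvable registry. A minimal sketch for confirming the rewritten image reference, assuming the Deployment is named metrics-server (as its ReplicaSet hash 57f55c9bc5 suggests) and that its container is listed first:

    kubectl --context default-k8s-diff-port-039263 -n kube-system \
      get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'

The ip6tables-canary failure that repeats each minute is likewise benign: the Buildroot guest kernel ships without the ip6table_nat module, so the IPv6 nat table cannot be initialized. One way to check from inside the VM, assuming `minikube ssh` still works for this profile:

    out/minikube-linux-amd64 -p default-k8s-diff-port-039263 ssh "lsmod | grep ip6table_nat || echo ip6table_nat not loaded"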
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-039263 -n default-k8s-diff-port-039263
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-039263 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-j6t7g
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-039263 describe pod metrics-server-57f55c9bc5-j6t7g
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-039263 describe pod metrics-server-57f55c9bc5-j6t7g: exit status 1 (68.794802ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-j6t7g" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-039263 describe pod metrics-server-57f55c9bc5-j6t7g: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.07s)
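The post-mortem above also races with the cluster: metrics-server-57f55c9bc5-j6t7g was listed as non-running, but appears to have been deleted (or replaced under a new name) by the time `kubectl describe` ran, hence the NotFound. Re-running the harness's own query (copied verbatim from the step above) would show whether a replacement pod took its place:

    kubectl --context default-k8s-diff-port-039263 get po \
      -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running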

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1108 00:22:02.004903   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
E1108 00:23:53.871917   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
E1108 00:25:38.956187   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
E1108 00:25:42.434060   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
E1108 00:27:05.486028   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-590541 -n old-k8s-version-590541
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-11-08 00:30:06.101703671 +0000 UTC m=+5340.265012556
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
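The waiter above polls for up to 9m0s for any pod matching the selector. A hand-run equivalent of that check, with the namespace and label taken from the waiter's own message (a sketch, not part of the harness):

    kubectl --context old-k8s-version-590541 -n kubernetes-dashboard \
      get pods -l k8s-app=kubernetes-dashboard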
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-590541 -n old-k8s-version-590541
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-590541 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-590541 logs -n 25: (1.600524961s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p kubernetes-upgrade-161055                           | kubernetes-upgrade-161055    | jenkins | v1.32.0 | 08 Nov 23 00:04 UTC | 08 Nov 23 00:04 UTC |
	| start   | -p no-preload-320390                                   | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:04 UTC | 08 Nov 23 00:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-484343                              | cert-expiration-484343       | jenkins | v1.32.0 | 08 Nov 23 00:04 UTC | 08 Nov 23 00:05 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-590541        | old-k8s-version-590541       | jenkins | v1.32.0 | 08 Nov 23 00:05 UTC | 08 Nov 23 00:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-590541                              | old-k8s-version-590541       | jenkins | v1.32.0 | 08 Nov 23 00:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-484343                              | cert-expiration-484343       | jenkins | v1.32.0 | 08 Nov 23 00:05 UTC | 08 Nov 23 00:05 UTC |
	| start   | -p embed-certs-253253                                  | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:05 UTC | 08 Nov 23 00:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-320390             | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:06 UTC | 08 Nov 23 00:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-320390                                   | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-253253            | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:06 UTC | 08 Nov 23 00:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-253253                                  | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-688874                              | stopped-upgrade-688874       | jenkins | v1.32.0 | 08 Nov 23 00:06 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-688874                              | stopped-upgrade-688874       | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC | 08 Nov 23 00:07 UTC |
	| delete  | -p                                                     | disable-driver-mounts-560216 | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC | 08 Nov 23 00:07 UTC |
	|         | disable-driver-mounts-560216                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC | 08 Nov 23 00:09 UTC |
	|         | default-k8s-diff-port-039263                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-590541             | old-k8s-version-590541       | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-590541                              | old-k8s-version-590541       | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC | 08 Nov 23 00:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-320390                  | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-253253                 | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-320390                                   | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC | 08 Nov 23 00:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-253253                                  | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC | 08 Nov 23 00:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-039263  | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC | 08 Nov 23 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC |                     |
	|         | default-k8s-diff-port-039263                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-039263       | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:12 UTC | 08 Nov 23 00:19 UTC |
	|         | default-k8s-diff-port-039263                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/08 00:12:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
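
A note for readers post-processing these reports: the header above gives the klog layout used by every entry that follows. A minimal Go sketch (ours, not minikube's) that splits one entry into level, date, time, PID, source location, and message:

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine matches the documented layout:
	//   [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

	func main() {
		line := "I1108 00:12:00.921478   51228 out.go:296] Setting OutFile to fd 1 ..."
		m := klogLine.FindStringSubmatch(line)
		if m == nil {
			fmt.Println("not a klog line")
			return
		}
		fmt.Printf("level=%s date=%s time=%s pid=%s src=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
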
	I1108 00:12:00.921478   51228 out.go:296] Setting OutFile to fd 1 ...
	I1108 00:12:00.921584   51228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:12:00.921592   51228 out.go:309] Setting ErrFile to fd 2...
	I1108 00:12:00.921597   51228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:12:00.921752   51228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
	I1108 00:12:00.922282   51228 out.go:303] Setting JSON to false
	I1108 00:12:00.923151   51228 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6870,"bootTime":1699395451,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 00:12:00.923210   51228 start.go:138] virtualization: kvm guest
	I1108 00:12:00.925322   51228 out.go:177] * [default-k8s-diff-port-039263] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1108 00:12:00.926718   51228 out.go:177]   - MINIKUBE_LOCATION=17585
	I1108 00:12:00.928030   51228 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 00:12:00.926756   51228 notify.go:220] Checking for updates...
	I1108 00:12:00.930659   51228 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:12:00.932049   51228 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	I1108 00:12:00.933341   51228 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 00:12:00.934394   51228 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 00:12:00.936334   51228 config.go:182] Loaded profile config "default-k8s-diff-port-039263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:12:00.936806   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:12:00.936857   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:12:00.950893   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36119
	I1108 00:12:00.951284   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:12:00.951775   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:12:00.951796   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:12:00.952131   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:12:00.952308   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:12:00.952537   51228 driver.go:378] Setting default libvirt URI to qemu:///system
	I1108 00:12:00.952850   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:12:00.952894   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:12:00.966402   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44715
	I1108 00:12:00.966726   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:12:00.967218   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:12:00.967238   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:12:00.967525   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:12:00.967705   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:12:01.002079   51228 out.go:177] * Using the kvm2 driver based on existing profile
	I1108 00:12:01.003352   51228 start.go:298] selected driver: kvm2
	I1108 00:12:01.003362   51228 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-039263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-039263 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.116 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:12:01.003471   51228 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 00:12:01.004117   51228 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:12:01.004197   51228 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17585-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1108 00:12:01.018635   51228 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1108 00:12:01.018987   51228 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 00:12:01.019047   51228 cni.go:84] Creating CNI manager for ""
	I1108 00:12:01.019060   51228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:12:01.019072   51228 start_flags.go:323] config:
	{Name:default-k8s-diff-port-039263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-039263 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.116 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:12:01.019251   51228 iso.go:125] acquiring lock: {Name:mk02d02b2a7a45dbdd1b46a32fb0724673cb4d8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:12:01.021306   51228 out.go:177] * Starting control plane node default-k8s-diff-port-039263 in cluster default-k8s-diff-port-039263
	I1108 00:12:00.865093   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:03.937104   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:01.022723   51228 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1108 00:12:01.022765   51228 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1108 00:12:01.022777   51228 cache.go:56] Caching tarball of preloaded images
	I1108 00:12:01.022864   51228 preload.go:174] Found /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 00:12:01.022875   51228 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
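
The preload check above boils down to a stat on a versioned tarball path under the minikube home. A sketch of that decision (path layout copied from the log; the helper name is ours):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadPath rebuilds the cache path the log prints for a given
	// Kubernetes version and container runtime.
	func preloadPath(minikubeHome, k8sVersion, runtime string) string {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4",
			k8sVersion, runtime)
		return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	}

	func main() {
		p := preloadPath("/home/jenkins/minikube-integration/17585-9647/.minikube",
			"v1.28.3", "cri-o")
		if _, err := os.Stat(p); err == nil {
			fmt.Println("found local preload, skipping download:", p)
		} else {
			fmt.Println("preload missing, would download:", p)
		}
	}
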
	I1108 00:12:01.022984   51228 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/config.json ...
	I1108 00:12:01.023164   51228 start.go:365] acquiring machines lock for default-k8s-diff-port-039263: {Name:mkf032f30be570950285b6e092e75fb29cc3d166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
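
The machines lock above is acquired with a 500ms retry delay and a 13m timeout, and the later "acquired machines lock ... in 3m26s" lines show how long contending starts can actually wait. A rough stdlib-only sketch of that poll-until-deadline pattern (minikube's real lock is more involved; this is illustrative):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// acquire polls for an exclusive lock file every delay, giving up
	// after timeout -- the Delay/Timeout shape printed in the log.
	func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if !errors.Is(err, os.ErrExist) {
				return nil, err
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out after %s waiting for %s", timeout, path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		start := time.Now()
		release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Printf("acquired machines lock in %s\n", time.Since(start))
	}
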
	I1108 00:12:10.017091   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:13.089091   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:19.169065   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:22.241084   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:28.321050   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:31.393060   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:37.473056   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:40.475708   50505 start.go:369] acquired machines lock for "no-preload-320390" in 3m26.103068871s
	I1108 00:12:40.475773   50505 start.go:96] Skipping create...Using existing machine configuration
	I1108 00:12:40.475781   50505 fix.go:54] fixHost starting: 
	I1108 00:12:40.476087   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:12:40.476116   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:12:40.490309   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45419
	I1108 00:12:40.490708   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:12:40.491196   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:12:40.491217   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:12:40.491530   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:12:40.491718   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:40.491870   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetState
	I1108 00:12:40.493597   50505 fix.go:102] recreateIfNeeded on no-preload-320390: state=Stopped err=<nil>
	I1108 00:12:40.493628   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	W1108 00:12:40.493762   50505 fix.go:128] unexpected machine state, will restart: <nil>
	I1108 00:12:40.495670   50505 out.go:177] * Restarting existing kvm2 VM for "no-preload-320390" ...
	I1108 00:12:40.496930   50505 main.go:141] libmachine: (no-preload-320390) Calling .Start
	I1108 00:12:40.497098   50505 main.go:141] libmachine: (no-preload-320390) Ensuring networks are active...
	I1108 00:12:40.497753   50505 main.go:141] libmachine: (no-preload-320390) Ensuring network default is active
	I1108 00:12:40.498094   50505 main.go:141] libmachine: (no-preload-320390) Ensuring network mk-no-preload-320390 is active
	I1108 00:12:40.498442   50505 main.go:141] libmachine: (no-preload-320390) Getting domain xml...
	I1108 00:12:40.499199   50505 main.go:141] libmachine: (no-preload-320390) Creating domain...
	I1108 00:12:41.718179   50505 main.go:141] libmachine: (no-preload-320390) Waiting to get IP...
	I1108 00:12:41.719024   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:41.719423   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:41.719497   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:41.719407   51373 retry.go:31] will retry after 204.819851ms: waiting for machine to come up
	I1108 00:12:41.925924   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:41.926414   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:41.926445   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:41.926361   51373 retry.go:31] will retry after 237.59613ms: waiting for machine to come up
	I1108 00:12:42.165848   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:42.166251   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:42.166282   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:42.166195   51373 retry.go:31] will retry after 306.914093ms: waiting for machine to come up
	I1108 00:12:42.474651   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:42.475026   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:42.475057   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:42.474981   51373 retry.go:31] will retry after 490.427385ms: waiting for machine to come up
	I1108 00:12:42.967292   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:42.967709   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:42.967733   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:42.967661   51373 retry.go:31] will retry after 684.227655ms: waiting for machine to come up
	I1108 00:12:43.653384   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:43.653823   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:43.653847   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:43.653774   51373 retry.go:31] will retry after 640.101868ms: waiting for machine to come up
	I1108 00:12:40.473798   50022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 00:12:40.473838   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:12:40.475605   50022 machine.go:91] provisioned docker machine in 4m37.566672036s
	I1108 00:12:40.475639   50022 fix.go:56] fixHost completed within 4m37.589859084s
	I1108 00:12:40.475644   50022 start.go:83] releasing machines lock for "old-k8s-version-590541", held for 4m37.589890946s
	W1108 00:12:40.475670   50022 start.go:691] error starting host: provision: host is not running
	W1108 00:12:40.475777   50022 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1108 00:12:40.475788   50022 start.go:706] Will try again in 5 seconds ...
	I1108 00:12:44.295060   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:44.295559   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:44.295610   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:44.295506   51373 retry.go:31] will retry after 797.709386ms: waiting for machine to come up
	I1108 00:12:45.095135   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:45.095552   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:45.095575   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:45.095476   51373 retry.go:31] will retry after 1.052157242s: waiting for machine to come up
	I1108 00:12:46.149040   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:46.149393   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:46.149426   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:46.149336   51373 retry.go:31] will retry after 1.246701556s: waiting for machine to come up
	I1108 00:12:47.397579   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:47.397942   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:47.397981   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:47.397900   51373 retry.go:31] will retry after 1.742754262s: waiting for machine to come up
	I1108 00:12:49.142995   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:49.143390   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:49.143419   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:49.143349   51373 retry.go:31] will retry after 2.412997156s: waiting for machine to come up
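
The retry.go delays above (204ms, 237ms, 306ms, 490ms, ... up to a few seconds) trace a jittered, roughly exponential curve while the VM waits for a DHCP lease. A sketch of that backoff shape; the constants are our guesses from the printed delays, not minikube's actual tuning:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// backoff returns a jittered, capped-exponential wait for attempt n.
	func backoff(n int) time.Duration {
		d := 200 * time.Millisecond << uint(n) // 200ms, 400ms, 800ms, ...
		if d > 4*time.Second {
			d = 4 * time.Second
		}
		// +/-25% jitter so concurrent waiters do not poll in lockstep.
		return d + time.Duration(rand.Int63n(int64(d)/2)) - d/4
	}

	func main() {
		for attempt := 0; attempt < 6; attempt++ {
			wait := backoff(attempt)
			fmt.Printf("retry %d: will retry after %s: waiting for machine to come up\n",
				attempt, wait)
			time.Sleep(wait)
			// ...query the libvirt network's DHCP leases for the domain's IP here...
		}
	}
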
	I1108 00:12:45.476072   50022 start.go:365] acquiring machines lock for old-k8s-version-590541: {Name:mkf032f30be570950285b6e092e75fb29cc3d166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1108 00:12:51.558471   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:51.558857   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:51.558880   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:51.558809   51373 retry.go:31] will retry after 3.169873944s: waiting for machine to come up
	I1108 00:12:54.732010   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:54.732320   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:54.732340   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:54.732292   51373 retry.go:31] will retry after 3.452837487s: waiting for machine to come up
	I1108 00:12:58.188516   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.188983   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has current primary IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.189014   50505 main.go:141] libmachine: (no-preload-320390) Found IP for machine: 192.168.61.176
	I1108 00:12:58.189036   50505 main.go:141] libmachine: (no-preload-320390) Reserving static IP address...
	I1108 00:12:58.189332   50505 main.go:141] libmachine: (no-preload-320390) Reserved static IP address: 192.168.61.176
	I1108 00:12:58.189364   50505 main.go:141] libmachine: (no-preload-320390) Waiting for SSH to be available...
	I1108 00:12:58.189388   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "no-preload-320390", mac: "52:54:00:0f:d8:91", ip: "192.168.61.176"} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.189415   50505 main.go:141] libmachine: (no-preload-320390) DBG | skip adding static IP to network mk-no-preload-320390 - found existing host DHCP lease matching {name: "no-preload-320390", mac: "52:54:00:0f:d8:91", ip: "192.168.61.176"}
	I1108 00:12:58.189432   50505 main.go:141] libmachine: (no-preload-320390) DBG | Getting to WaitForSSH function...
	I1108 00:12:58.191264   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.191565   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.191598   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.191730   50505 main.go:141] libmachine: (no-preload-320390) DBG | Using SSH client type: external
	I1108 00:12:58.191760   50505 main.go:141] libmachine: (no-preload-320390) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa (-rw-------)
	I1108 00:12:58.191794   50505 main.go:141] libmachine: (no-preload-320390) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.176 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1108 00:12:58.191808   50505 main.go:141] libmachine: (no-preload-320390) DBG | About to run SSH command:
	I1108 00:12:58.191819   50505 main.go:141] libmachine: (no-preload-320390) DBG | exit 0
	I1108 00:12:58.284621   50505 main.go:141] libmachine: (no-preload-320390) DBG | SSH cmd err, output: <nil>: 
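
The DBG lines above print the exact argv of the external SSH probe ("About to run SSH command: exit 0"). Reassembled as a runnable Go sketch, with the key path and address taken from the log (the option list and retry loop are simplified):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// waitForSSH runs `exit 0` over ssh with the same kind of options
	// the log shows; a zero exit status means the guest is reachable.
	func waitForSSH(addr, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@" + addr,
			"exit 0",
		}
		out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("ssh probe failed: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		err := waitForSSH("192.168.61.176",
			"/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa")
		fmt.Println("ssh reachable:", err == nil)
	}
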
	I1108 00:12:58.284983   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetConfigRaw
	I1108 00:12:58.285600   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetIP
	I1108 00:12:58.287966   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.288289   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.288325   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.288532   50505 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/config.json ...
	I1108 00:12:58.288712   50505 machine.go:88] provisioning docker machine ...
	I1108 00:12:58.288732   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:58.288917   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetMachineName
	I1108 00:12:58.289074   50505 buildroot.go:166] provisioning hostname "no-preload-320390"
	I1108 00:12:58.289097   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetMachineName
	I1108 00:12:58.289217   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:58.291053   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.291329   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.291358   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.291460   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:58.291613   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.291749   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.291849   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:58.292009   50505 main.go:141] libmachine: Using SSH client type: native
	I1108 00:12:58.292394   50505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1108 00:12:58.292419   50505 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-320390 && echo "no-preload-320390" | sudo tee /etc/hostname
	I1108 00:12:58.433310   50505 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-320390
	
	I1108 00:12:58.433333   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:58.435959   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.436351   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.436383   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.436531   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:58.436710   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.436853   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.436959   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:58.437088   50505 main.go:141] libmachine: Using SSH client type: native
	I1108 00:12:58.437607   50505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1108 00:12:58.437633   50505 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-320390' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-320390/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-320390' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 00:12:58.578473   50505 main.go:141] libmachine: SSH cmd err, output: <nil>: 
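
The script that just ran is templated from the hostname: rewrite an existing 127.0.1.1 entry in /etc/hosts if there is one, otherwise append it. A small sketch of rendering that script in Go (helper name is ours; the shell body mirrors the lines above):

	package main

	import "fmt"

	// hostsScript renders the /etc/hosts fix-up executed over SSH.
	func hostsScript(hostname string) string {
		return fmt.Sprintf(`
	if ! grep -xq '.*\s%[1]s' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
		else
			echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
		fi
	fi`, hostname)
	}

	func main() {
		fmt.Print(hostsScript("no-preload-320390"))
	}
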
	I1108 00:12:58.578506   50505 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1108 00:12:58.578568   50505 buildroot.go:174] setting up certificates
	I1108 00:12:58.578582   50505 provision.go:83] configureAuth start
	I1108 00:12:58.578600   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetMachineName
	I1108 00:12:58.578889   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetIP
	I1108 00:12:58.581534   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.581857   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.581881   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.581948   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:58.583777   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.584002   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.584023   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.584121   50505 provision.go:138] copyHostCerts
	I1108 00:12:58.584172   50505 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1108 00:12:58.584184   50505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1108 00:12:58.584247   50505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1108 00:12:58.584327   50505 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1108 00:12:58.584337   50505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1108 00:12:58.584359   50505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1108 00:12:58.584407   50505 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1108 00:12:58.584415   50505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1108 00:12:58.584434   50505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1108 00:12:58.584473   50505 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.no-preload-320390 san=[192.168.61.176 192.168.61.176 localhost 127.0.0.1 minikube no-preload-320390]
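
configureAuth generates a server certificate whose SANs cover every name and address the API server might be reached by (the san=[...] list above). A compressed, self-signed sketch carrying the same SAN set; minikube actually signs with its cached CA, which this omits for brevity:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-320390"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			DNSNames:     []string{"localhost", "minikube", "no-preload-320390"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.61.176"), net.ParseIP("127.0.0.1")},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	}
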
	I1108 00:12:58.785035   50505 provision.go:172] copyRemoteCerts
	I1108 00:12:58.785095   50505 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 00:12:58.785127   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:58.787683   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.788001   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.788037   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.788194   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:58.788363   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.788534   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:58.788678   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:12:58.881791   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 00:12:58.905314   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1108 00:12:58.928183   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 00:12:58.951053   50505 provision.go:86] duration metric: configureAuth took 372.456375ms
	I1108 00:12:58.951079   50505 buildroot.go:189] setting minikube options for container-runtime
	I1108 00:12:58.951288   50505 config.go:182] Loaded profile config "no-preload-320390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:12:58.951368   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:58.953851   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.954158   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.954182   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.954309   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:58.954504   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.954689   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.954819   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:58.954964   50505 main.go:141] libmachine: Using SSH client type: native
	I1108 00:12:58.955269   50505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1108 00:12:58.955283   50505 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 00:12:59.265311   50505 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 00:12:59.265342   50505 machine.go:91] provisioned docker machine in 976.618103ms
	I1108 00:12:59.265353   50505 start.go:300] post-start starting for "no-preload-320390" (driver="kvm2")
	I1108 00:12:59.265362   50505 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 00:12:59.265377   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:59.265683   50505 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 00:12:59.265721   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:59.533994   50613 start.go:369] acquired machines lock for "embed-certs-253253" in 3m37.489465904s
	I1108 00:12:59.534047   50613 start.go:96] Skipping create...Using existing machine configuration
	I1108 00:12:59.534093   50613 fix.go:54] fixHost starting: 
	I1108 00:12:59.534485   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:12:59.534531   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:12:59.553784   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34533
	I1108 00:12:59.554193   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:12:59.554676   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:12:59.554702   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:12:59.555006   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:12:59.555188   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:12:59.555320   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetState
	I1108 00:12:59.556783   50613 fix.go:102] recreateIfNeeded on embed-certs-253253: state=Stopped err=<nil>
	I1108 00:12:59.556804   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	W1108 00:12:59.556989   50613 fix.go:128] unexpected machine state, will restart: <nil>
	I1108 00:12:59.558834   50613 out.go:177] * Restarting existing kvm2 VM for "embed-certs-253253" ...
	I1108 00:12:59.268378   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.268792   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:59.268836   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.268991   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:59.269175   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:59.269337   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:59.269480   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:12:59.363687   50505 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 00:12:59.368009   50505 info.go:137] Remote host: Buildroot 2021.02.12
	I1108 00:12:59.368028   50505 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1108 00:12:59.368087   50505 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1108 00:12:59.368176   50505 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1108 00:12:59.368287   50505 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 00:12:59.377685   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:12:59.399143   50505 start.go:303] post-start completed in 133.780055ms
	I1108 00:12:59.399161   50505 fix.go:56] fixHost completed within 18.923380073s
	I1108 00:12:59.399178   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:59.401608   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.401977   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:59.402007   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.402127   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:59.402315   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:59.402471   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:59.402650   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:59.402824   50505 main.go:141] libmachine: Using SSH client type: native
	I1108 00:12:59.403150   50505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1108 00:12:59.403162   50505 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1108 00:12:59.533831   50505 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699402379.481958632
	
	I1108 00:12:59.533852   50505 fix.go:206] guest clock: 1699402379.481958632
	I1108 00:12:59.533859   50505 fix.go:219] Guest: 2023-11-08 00:12:59.481958632 +0000 UTC Remote: 2023-11-08 00:12:59.399164235 +0000 UTC m=+225.183083525 (delta=82.794397ms)
	I1108 00:12:59.533876   50505 fix.go:190] guest clock delta is within tolerance: 82.794397ms
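
The fix.go lines above compute delta = guest clock - host clock and accept the start when it falls inside a tolerance (82.794397ms passed here; the 2s threshold below is our assumption, the log does not print it):

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaOK reports the absolute guest/host clock skew and
	// whether it is within tolerance.
	func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		guest := time.Unix(1699402379, 481958632) // "guest clock: 1699402379.481958632"
		host := guest.Add(-82794397 * time.Nanosecond)
		delta, ok := clockDeltaOK(guest, host, 2*time.Second)
		fmt.Printf("delta=%s within tolerance: %v\n", delta, ok)
	}
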
	I1108 00:12:59.533880   50505 start.go:83] releasing machines lock for "no-preload-320390", held for 19.058127295s
	I1108 00:12:59.533902   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:59.534171   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetIP
	I1108 00:12:59.537173   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.537616   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:59.537665   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.537736   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:59.538230   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:59.538431   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:59.538517   50505 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 00:12:59.538613   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:59.538659   50505 ssh_runner.go:195] Run: cat /version.json
	I1108 00:12:59.538683   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:59.541051   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.541283   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.541438   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:59.541463   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.541599   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:59.541608   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:59.541634   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.541775   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:59.541845   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:59.541939   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:59.541997   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:59.542078   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:12:59.542093   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:59.542265   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:12:59.637947   50505 ssh_runner.go:195] Run: systemctl --version
	I1108 00:12:59.660255   50505 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 00:12:59.809407   50505 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 00:12:59.816246   50505 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 00:12:59.816323   50505 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:12:59.831564   50505 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 00:12:59.831586   50505 start.go:472] detecting cgroup driver to use...
	I1108 00:12:59.831651   50505 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 00:12:59.847556   50505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 00:12:59.861077   50505 docker.go:203] disabling cri-docker service (if available) ...
	I1108 00:12:59.861143   50505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 00:12:59.876764   50505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 00:12:59.890894   50505 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 00:13:00.001947   50505 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 00:13:00.121923   50505 docker.go:219] disabling docker service ...
	I1108 00:13:00.122000   50505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 00:13:00.135525   50505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 00:13:00.148130   50505 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 00:13:00.259318   50505 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 00:13:00.368101   50505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 00:13:00.381138   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 00:13:00.398173   50505 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1108 00:13:00.398245   50505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:00.407655   50505 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 00:13:00.407699   50505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:00.416919   50505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:00.425767   50505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
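
The four sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, force the cgroupfs cgroup manager, and replace any conmon_cgroup setting with "pod". The same edits expressed in-memory with Go regexps (the real runner applies them remotely with sed; note the delete must precede the re-add):

	package main

	import (
		"fmt"
		"regexp"
	)

	func patchCrioConf(conf string) string {
		// pin the pause image (first sed)
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		// drop any stale conmon_cgroup line (third sed)
		conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
		// force cgroupfs and append conmon_cgroup after it (second and fourth seds)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
		return conf
	}

	func main() {
		sample := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
		fmt.Print(patchCrioConf(sample))
	}
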
	I1108 00:13:00.434447   50505 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 00:13:00.443679   50505 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 00:13:00.451581   50505 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1108 00:13:00.451619   50505 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1108 00:13:00.464498   50505 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
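
The crio.go lines above show the netfilter fallback: the bridge-nf sysctl fails with "cannot stat /proc/sys/net/bridge/..." while br_netfilter is not loaded, so the runner falls back to modprobe and then enables IPv4 forwarding. A host-side sketch of that sequence (the log performs it remotely through ssh_runner):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func ensureBridgeNetfilter() error {
		// probe the sysctl; a failure here "might be okay", per the log
		if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			// the proc entry only appears once the module is loaded
			if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
				return fmt.Errorf("modprobe br_netfilter: %w", err)
			}
		}
		return exec.Command("sudo", "sh", "-c",
			"echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
	}

	func main() {
		if err := ensureBridgeNetfilter(); err != nil {
			fmt.Println("netfilter setup failed:", err)
			return
		}
		fmt.Println("bridge netfilter and ip_forward configured")
	}
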
	I1108 00:13:00.474332   50505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 00:13:00.599521   50505 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 00:13:00.770248   50505 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 00:13:00.770341   50505 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 00:13:00.775707   50505 start.go:540] Will wait 60s for crictl version
	I1108 00:13:00.775768   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:00.779578   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 00:13:00.821230   50505 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1108 00:13:00.821320   50505 ssh_runner.go:195] Run: crio --version
	I1108 00:13:00.872851   50505 ssh_runner.go:195] Run: crio --version
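The two 60s waits above (socket path, then crictl version) are the readiness gate after restarting CRI-O. A sketch of the same polling loop:

    #!/usr/bin/env bash
    # Poll for the CRI-O socket, then query the runtime version over CRI,
    # the same check the log waits up to 60s on.
    set -euo pipefail

    for _ in $(seq 1 60); do
        if [ -S /var/run/crio/crio.sock ]; then break; fi
        sleep 1
    done
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version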
	I1108 00:13:00.920420   50505 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1108 00:12:59.560111   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Start
	I1108 00:12:59.560287   50613 main.go:141] libmachine: (embed-certs-253253) Ensuring networks are active...
	I1108 00:12:59.561030   50613 main.go:141] libmachine: (embed-certs-253253) Ensuring network default is active
	I1108 00:12:59.561390   50613 main.go:141] libmachine: (embed-certs-253253) Ensuring network mk-embed-certs-253253 is active
	I1108 00:12:59.561717   50613 main.go:141] libmachine: (embed-certs-253253) Getting domain xml...
	I1108 00:12:59.562287   50613 main.go:141] libmachine: (embed-certs-253253) Creating domain...
	I1108 00:13:00.806061   50613 main.go:141] libmachine: (embed-certs-253253) Waiting to get IP...
	I1108 00:13:00.806862   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:00.807268   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:00.807340   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:00.807226   51493 retry.go:31] will retry after 261.179966ms: waiting for machine to come up
	I1108 00:13:01.069535   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:01.070048   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:01.070078   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:01.069997   51493 retry.go:31] will retry after 302.795302ms: waiting for machine to come up
	I1108 00:13:01.374567   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:01.375094   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:01.375119   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:01.375043   51493 retry.go:31] will retry after 303.804523ms: waiting for machine to come up
	I1108 00:13:01.680374   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:01.680698   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:01.680726   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:01.680660   51493 retry.go:31] will retry after 446.122126ms: waiting for machine to come up
	I1108 00:13:00.921979   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetIP
	I1108 00:13:00.924760   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:13:00.925121   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:13:00.925148   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:13:00.925370   50505 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1108 00:13:00.929750   50505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
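The /etc/hosts rewrite above is guarded by the grep on the line before it: only when host.minikube.internal is missing does the entry get (re)written. The same idempotent update, sketched with the gateway IP from this run:

    #!/usr/bin/env bash
    # Pin host.minikube.internal to the network gateway: strip any old
    # entry, append the new one, and copy the result back over /etc/hosts.
    set -euo pipefail
    ip=192.168.61.1   # taken from the grep in the log above

    if ! grep -q 'host.minikube.internal' /etc/hosts; then
        { grep -v $'\thost.minikube.internal$' /etc/hosts || true
          printf '%s\thost.minikube.internal\n' "$ip"; } > "/tmp/hosts.$$"
        sudo cp "/tmp/hosts.$$" /etc/hosts
    fi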
	I1108 00:13:00.941338   50505 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1108 00:13:00.941372   50505 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:13:00.979343   50505 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1108 00:13:00.979370   50505 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.3 registry.k8s.io/kube-controller-manager:v1.28.3 registry.k8s.io/kube-scheduler:v1.28.3 registry.k8s.io/kube-proxy:v1.28.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1108 00:13:00.979489   50505 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1108 00:13:00.979539   50505 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1108 00:13:00.979465   50505 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:13:00.979636   50505 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.3
	I1108 00:13:00.979477   50505 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1108 00:13:00.979465   50505 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.3
	I1108 00:13:00.979515   50505 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1108 00:13:00.979516   50505 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.3
	I1108 00:13:00.980609   50505 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1108 00:13:00.980645   50505 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1108 00:13:00.980677   50505 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1108 00:13:00.980704   50505 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1108 00:13:00.980645   50505 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.3
	I1108 00:13:00.980733   50505 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:13:00.980949   50505 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.3
	I1108 00:13:00.980994   50505 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.3
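The "assuming images are not preloaded" decision above comes from one CRI query: list the images CRI-O already has and look for the expected kube-apiserver tag. A sketch of that check; the jq parsing here is an assumption, since the logged Go code decodes the JSON itself:

    #!/usr/bin/env bash
    # Decide between "preloaded" and "load from cache" by checking whether
    # the expected apiserver tag is already in CRI-O's image store.
    set -euo pipefail
    want=registry.k8s.io/kube-apiserver:v1.28.3

    if sudo crictl images --output json \
         | jq -e --arg t "$want" '.images[].repoTags[]? | select(. == $t)' >/dev/null; then
        echo "preloaded images present"
    else
        echo "assuming images are not preloaded; loading from the local cache"
    fi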
	I1108 00:13:01.126154   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.3
	I1108 00:13:01.131334   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.3
	I1108 00:13:01.141929   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.3
	I1108 00:13:01.150051   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.3
	I1108 00:13:01.178472   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I1108 00:13:01.198519   50505 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.3" does not exist at hash "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076" in container runtime
	I1108 00:13:01.198569   50505 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.3
	I1108 00:13:01.198628   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:01.214419   50505 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.3" does not exist at hash "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3" in container runtime
	I1108 00:13:01.214470   50505 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1108 00:13:01.214527   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:01.249270   50505 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.3" does not exist at hash "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4" in container runtime
	I1108 00:13:01.249316   50505 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.3
	I1108 00:13:01.249321   50505 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.3" needs transfer: "registry.k8s.io/kube-proxy:v1.28.3" does not exist at hash "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf" in container runtime
	I1108 00:13:01.249354   50505 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.3
	I1108 00:13:01.249363   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:01.249398   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:01.257971   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1108 00:13:01.268557   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1108 00:13:01.279207   50505 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1108 00:13:01.279254   50505 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1108 00:13:01.279255   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.3
	I1108 00:13:01.279295   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:01.279304   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.3
	I1108 00:13:01.279365   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.3
	I1108 00:13:01.279492   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.3
	I1108 00:13:01.477649   50505 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1108 00:13:01.477691   50505 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1108 00:13:01.477740   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:01.477782   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3
	I1108 00:13:01.477888   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1108 00:13:01.477888   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3
	I1108 00:13:01.477963   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3
	I1108 00:13:01.478038   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3
	I1108 00:13:01.478005   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1108 00:13:01.478079   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1108 00:13:01.478116   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.3
	I1108 00:13:01.478121   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1108 00:13:01.489810   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1108 00:13:01.490983   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.3 (exists)
	I1108 00:13:01.491011   50505 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1108 00:13:01.491049   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1108 00:13:01.490984   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.3 (exists)
	I1108 00:13:01.556911   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1108 00:13:01.556996   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.3 (exists)
	I1108 00:13:01.557036   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1
	I1108 00:13:01.557048   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.3 (exists)
	I1108 00:13:01.576123   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1108 00:13:01.576251   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0
	I1108 00:13:02.001052   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:13:02.127888   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:02.128302   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:02.128333   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:02.128247   51493 retry.go:31] will retry after 498.0349ms: waiting for machine to come up
	I1108 00:13:02.627872   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:02.628339   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:02.628373   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:02.628296   51493 retry.go:31] will retry after 852.947554ms: waiting for machine to come up
	I1108 00:13:03.483507   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:03.484074   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:03.484119   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:03.484024   51493 retry.go:31] will retry after 1.040831469s: waiting for machine to come up
	I1108 00:13:04.526186   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:04.526503   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:04.526535   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:04.526446   51493 retry.go:31] will retry after 960.701342ms: waiting for machine to come up
	I1108 00:13:05.489041   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:05.489473   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:05.489509   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:05.489456   51493 retry.go:31] will retry after 1.729813733s: waiting for machine to come up
	I1108 00:13:04.536381   50505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3: (3.045307892s)
	I1108 00:13:04.536412   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 from cache
	I1108 00:13:04.536439   50505 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1108 00:13:04.536453   50505 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1: (2.979392017s)
	I1108 00:13:04.536485   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I1108 00:13:04.536491   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1108 00:13:04.536531   50505 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0: (2.960264305s)
	I1108 00:13:04.536549   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I1108 00:13:04.536590   50505 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.535505624s)
	I1108 00:13:04.536622   50505 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1108 00:13:04.536652   50505 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:13:04.536694   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:07.220832   50505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3: (2.68430655s)
	I1108 00:13:07.220863   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 from cache
	I1108 00:13:07.220898   50505 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1108 00:13:07.220902   50505 ssh_runner.go:235] Completed: which crictl: (2.684187653s)
	I1108 00:13:07.220982   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1108 00:13:07.221015   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:13:08.593275   50505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3: (1.372272111s)
	I1108 00:13:08.593311   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 from cache
	I1108 00:13:08.593326   50505 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.372286228s)
	I1108 00:13:08.593374   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1108 00:13:08.593338   50505 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.3
	I1108 00:13:08.593474   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1108 00:13:08.593479   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3
	I1108 00:13:07.221541   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:07.221969   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:07.221998   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:07.221953   51493 retry.go:31] will retry after 1.97898588s: waiting for machine to come up
	I1108 00:13:09.202332   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:09.202803   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:09.202831   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:09.202756   51493 retry.go:31] will retry after 2.565503631s: waiting for machine to come up
	I1108 00:13:11.769857   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:11.770332   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:11.770354   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:11.770292   51493 retry.go:31] will retry after 3.236419831s: waiting for machine to come up
	I1108 00:13:10.382696   50505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3: (1.789194848s)
	I1108 00:13:10.382726   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 from cache
	I1108 00:13:10.382747   50505 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.789249445s)
	I1108 00:13:10.382776   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1108 00:13:10.382752   50505 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1108 00:13:10.382828   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I1108 00:13:11.846184   50505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.463326325s)
	I1108 00:13:11.846222   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I1108 00:13:11.846254   50505 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I1108 00:13:11.846322   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I1108 00:13:15.008441   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:15.008899   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:15.008936   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:15.008860   51493 retry.go:31] will retry after 3.079379099s: waiting for machine to come up
	I1108 00:13:19.138865   50505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.292505697s)
	I1108 00:13:19.138899   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I1108 00:13:19.138926   50505 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1108 00:13:19.138987   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
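Every image in the list follows the same transfer pattern seen above: remove the stale tag through crictl, stat the cached tarball under /var/lib/minikube/images to skip redundant copies, then stream it in with podman load (CRI-O and podman share containers/storage, which is why the load is visible to the runtime). One round of that loop, sketched for etcd:

    #!/usr/bin/env bash
    # One cache-load round: drop the old tag, confirm the tarball is in
    # place, and load it into the shared containers/storage.
    set -euo pipefail
    img=registry.k8s.io/etcd:3.5.9-0
    tar=/var/lib/minikube/images/etcd_3.5.9-0

    sudo crictl rmi "$img" || true    # the tag may not exist yet
    stat -c "%s %y" "$tar"            # size + mtime, same probe as the log
    sudo podman load -i "$tar"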
	I1108 00:13:19.465800   51228 start.go:369] acquired machines lock for "default-k8s-diff-port-039263" in 1m18.442604828s
	I1108 00:13:19.465853   51228 start.go:96] Skipping create...Using existing machine configuration
	I1108 00:13:19.465863   51228 fix.go:54] fixHost starting: 
	I1108 00:13:19.466321   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:13:19.466373   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:13:19.485614   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32967
	I1108 00:13:19.486012   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:13:19.486457   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:13:19.486478   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:13:19.486839   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:13:19.487016   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:19.487158   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetState
	I1108 00:13:19.488697   51228 fix.go:102] recreateIfNeeded on default-k8s-diff-port-039263: state=Stopped err=<nil>
	I1108 00:13:19.488733   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	W1108 00:13:19.488889   51228 fix.go:128] unexpected machine state, will restart: <nil>
	I1108 00:13:19.490913   51228 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-039263" ...
	I1108 00:13:19.492333   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Start
	I1108 00:13:19.492481   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Ensuring networks are active...
	I1108 00:13:19.493162   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Ensuring network default is active
	I1108 00:13:19.493592   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Ensuring network mk-default-k8s-diff-port-039263 is active
	I1108 00:13:19.494016   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Getting domain xml...
	I1108 00:13:19.494668   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Creating domain...
	I1108 00:13:20.910918   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting to get IP...
	I1108 00:13:20.911948   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:20.912423   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:20.912517   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:20.912403   51635 retry.go:31] will retry after 265.914494ms: waiting for machine to come up
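"Restarting existing kvm2 VM" above is, at the libvirt level, network activation plus a domain start followed by DHCP-lease polling. A sketch with the virsh CLI; the CLI usage is an assumption, since minikube drives libvirt through its Go bindings rather than virsh:

    #!/usr/bin/env bash
    # Bring the libvirt networks up, boot the domain, and wait for a DHCP
    # lease to appear, the same loop the retry lines above implement.
    set -euo pipefail
    dom=default-k8s-diff-port-039263
    net=mk-default-k8s-diff-port-039263

    virsh net-start default 2>/dev/null || true   # already-active is fine
    virsh net-start "$net"  2>/dev/null || true
    virsh start "$dom"

    until virsh net-dhcp-leases "$net" | grep -q "$dom"; do
        sleep 2
    done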
	I1108 00:13:18.092086   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.092516   50613 main.go:141] libmachine: (embed-certs-253253) Found IP for machine: 192.168.39.159
	I1108 00:13:18.092544   50613 main.go:141] libmachine: (embed-certs-253253) Reserving static IP address...
	I1108 00:13:18.092568   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has current primary IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.092947   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "embed-certs-253253", mac: "52:54:00:1a:6e:cb", ip: "192.168.39.159"} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.092980   50613 main.go:141] libmachine: (embed-certs-253253) DBG | skip adding static IP to network mk-embed-certs-253253 - found existing host DHCP lease matching {name: "embed-certs-253253", mac: "52:54:00:1a:6e:cb", ip: "192.168.39.159"}
	I1108 00:13:18.092999   50613 main.go:141] libmachine: (embed-certs-253253) Reserved static IP address: 192.168.39.159
	I1108 00:13:18.093019   50613 main.go:141] libmachine: (embed-certs-253253) Waiting for SSH to be available...
	I1108 00:13:18.093036   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Getting to WaitForSSH function...
	I1108 00:13:18.094941   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.095285   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.095311   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.095472   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Using SSH client type: external
	I1108 00:13:18.095487   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa (-rw-------)
	I1108 00:13:18.095519   50613 main.go:141] libmachine: (embed-certs-253253) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1108 00:13:18.095535   50613 main.go:141] libmachine: (embed-certs-253253) DBG | About to run SSH command:
	I1108 00:13:18.095545   50613 main.go:141] libmachine: (embed-certs-253253) DBG | exit 0
	I1108 00:13:18.184364   50613 main.go:141] libmachine: (embed-certs-253253) DBG | SSH cmd err, output: <nil>: 
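WaitForSSH above simply retries "exit 0" over SSH with host-key checking disabled until the guest answers; the empty "SSH cmd err, output: <nil>" line marks the first success. A sketch using the same flags as the logged command (key path shortened here):

    #!/usr/bin/env bash
    # Retry a no-op SSH command until the machine accepts connections.
    set -euo pipefail
    ip=192.168.39.159
    key=$HOME/.minikube/machines/embed-certs-253253/id_rsa

    until ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
              -o ConnectTimeout=10 -i "$key" "docker@$ip" exit 0 2>/dev/null; do
        sleep 1
    done
    echo "SSH is up"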
	I1108 00:13:18.184700   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetConfigRaw
	I1108 00:13:18.264914   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetIP
	I1108 00:13:18.267404   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.267716   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.267752   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.267951   50613 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/config.json ...
	I1108 00:13:18.268153   50613 machine.go:88] provisioning docker machine ...
	I1108 00:13:18.268171   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:13:18.268382   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetMachineName
	I1108 00:13:18.268642   50613 buildroot.go:166] provisioning hostname "embed-certs-253253"
	I1108 00:13:18.268662   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetMachineName
	I1108 00:13:18.268783   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:18.270977   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.271275   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.271302   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.271485   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:18.271683   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.271873   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.272021   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:18.272185   50613 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:18.272549   50613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1108 00:13:18.272564   50613 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-253253 && echo "embed-certs-253253" | sudo tee /etc/hostname
	I1108 00:13:18.408618   50613 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-253253
	
	I1108 00:13:18.408655   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:18.411325   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.411629   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.411673   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.411793   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:18.412024   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.412204   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.412353   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:18.412513   50613 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:18.412864   50613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1108 00:13:18.412884   50613 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-253253' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-253253/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-253253' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 00:13:18.537585   50613 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 00:13:18.537611   50613 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1108 00:13:18.537628   50613 buildroot.go:174] setting up certificates
	I1108 00:13:18.537636   50613 provision.go:83] configureAuth start
	I1108 00:13:18.537644   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetMachineName
	I1108 00:13:18.537930   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetIP
	I1108 00:13:18.540544   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.540937   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.540966   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.541078   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:18.543184   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.543455   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.543486   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.543559   50613 provision.go:138] copyHostCerts
	I1108 00:13:18.543621   50613 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1108 00:13:18.543639   50613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1108 00:13:18.543692   50613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1108 00:13:18.543793   50613 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1108 00:13:18.543801   50613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1108 00:13:18.543823   50613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1108 00:13:18.543876   50613 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1108 00:13:18.543884   50613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1108 00:13:18.543900   50613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1108 00:13:18.543962   50613 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.embed-certs-253253 san=[192.168.39.159 192.168.39.159 localhost 127.0.0.1 minikube embed-certs-253253]
	I1108 00:13:18.707824   50613 provision.go:172] copyRemoteCerts
	I1108 00:13:18.707880   50613 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 00:13:18.707905   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:18.710820   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.711181   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.711208   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.711437   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:18.711642   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.711815   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:18.711973   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:13:18.803200   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 00:13:18.827267   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1108 00:13:18.850710   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 00:13:18.876752   50613 provision.go:86] duration metric: configureAuth took 339.103407ms
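configureAuth above refreshes the host-side CA material, issues a server certificate whose SANs cover the machine's IP, localhost, and hostnames, and scp's the three files into /etc/docker on the guest. A rough openssl equivalent of the server-cert step; the exact invocation is an assumption, as minikube generates the certificate in Go:

    #!/usr/bin/env bash
    # Issue a server cert signed by the minikube CA with the SAN list the
    # log shows for embed-certs-253253.
    set -euo pipefail
    cadir=$HOME/.minikube/certs

    openssl req -new -newkey rsa:2048 -nodes \
        -keyout server-key.pem -subj "/O=jenkins.embed-certs-253253" -out server.csr
    openssl x509 -req -in server.csr -CA "$cadir/ca.pem" -CAkey "$cadir/ca-key.pem" \
        -CAcreateserial -days 365 -out server.pem \
        -extfile <(printf 'subjectAltName=IP:192.168.39.159,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:embed-certs-253253')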
	I1108 00:13:18.876781   50613 buildroot.go:189] setting minikube options for container-runtime
	I1108 00:13:18.876987   50613 config.go:182] Loaded profile config "embed-certs-253253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:13:18.877075   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:18.879751   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.880121   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.880149   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.880331   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:18.880501   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.880649   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.880772   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:18.880929   50613 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:18.881240   50613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1108 00:13:18.881257   50613 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 00:13:19.199987   50613 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 00:13:19.200012   50613 machine.go:91] provisioned docker machine in 931.846262ms
	I1108 00:13:19.200023   50613 start.go:300] post-start starting for "embed-certs-253253" (driver="kvm2")
	I1108 00:13:19.200035   50613 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 00:13:19.200057   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:13:19.200377   50613 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 00:13:19.200409   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:19.203230   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.203610   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:19.203644   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.203771   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:19.203963   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:19.204118   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:19.204231   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:13:19.297991   50613 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 00:13:19.303630   50613 info.go:137] Remote host: Buildroot 2021.02.12
	I1108 00:13:19.303655   50613 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1108 00:13:19.303721   50613 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1108 00:13:19.303831   50613 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1108 00:13:19.303956   50613 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 00:13:19.315605   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:13:19.339647   50613 start.go:303] post-start completed in 139.611237ms
	I1108 00:13:19.339665   50613 fix.go:56] fixHost completed within 19.805611247s
	I1108 00:13:19.339687   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:19.342291   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.342623   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:19.342648   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.342838   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:19.343019   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:19.343147   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:19.343323   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:19.343483   50613 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:19.343856   50613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1108 00:13:19.343868   50613 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1108 00:13:19.465645   50613 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699402399.415738784
	
	I1108 00:13:19.465670   50613 fix.go:206] guest clock: 1699402399.415738784
	I1108 00:13:19.465681   50613 fix.go:219] Guest: 2023-11-08 00:13:19.415738784 +0000 UTC Remote: 2023-11-08 00:13:19.339668655 +0000 UTC m=+237.442917453 (delta=76.070129ms)
	I1108 00:13:19.465704   50613 fix.go:190] guest clock delta is within tolerance: 76.070129ms
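The guest-clock check above compares the epoch time read over SSH (date +%s.%N) against the host's own clock and only resyncs when the delta leaves the tolerance window; here the 76ms delta passes. A sketch of the comparison (the ssh target and key handling are stand-ins for the machine provisioned above):

    #!/usr/bin/env bash
    # Sample epoch time on both sides and print the signed delta.
    set -euo pipefail

    guest=$(ssh docker@192.168.39.159 date +%s.%N)
    host=$(date +%s.%N)
    awk -v h="$host" -v g="$guest" 'BEGIN { printf "guest clock delta: %+.6fs\n", h - g }'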
	I1108 00:13:19.465710   50613 start.go:83] releasing machines lock for "embed-certs-253253", held for 19.931686858s
	I1108 00:13:19.465738   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:13:19.465996   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetIP
	I1108 00:13:19.468862   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.469185   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:19.469223   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.469365   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:13:19.469898   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:13:19.470091   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:13:19.470174   50613 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 00:13:19.470215   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:19.470300   50613 ssh_runner.go:195] Run: cat /version.json
	I1108 00:13:19.470321   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:19.473140   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.473285   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.473517   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:19.473562   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.473594   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:19.473612   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.473662   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:19.473777   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:19.473843   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:19.474004   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:19.474007   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:19.474153   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:13:19.474192   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:19.474344   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:13:19.565638   50613 ssh_runner.go:195] Run: systemctl --version
	I1108 00:13:19.591686   50613 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 00:13:19.747192   50613 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 00:13:19.755053   50613 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 00:13:19.755134   50613 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:13:19.774522   50613 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
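The find above neutralizes competing CNI configs by renaming them with a .mk_disabled suffix instead of deleting them, so the bridge/podman conflists stay recoverable. The same command, sketched with shell quoting restored:

    #!/usr/bin/env bash
    # Rename any bridge/podman CNI configs out of the way so CRI-O's own
    # CNI configuration is the only one left active.
    set -euo pipefail

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
        \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
        -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;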
	I1108 00:13:19.774551   50613 start.go:472] detecting cgroup driver to use...
	I1108 00:13:19.774652   50613 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 00:13:19.795492   50613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 00:13:19.809888   50613 docker.go:203] disabling cri-docker service (if available) ...
	I1108 00:13:19.809958   50613 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 00:13:19.823108   50613 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 00:13:19.835588   50613 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 00:13:19.940017   50613 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 00:13:20.075405   50613 docker.go:219] disabling docker service ...
	I1108 00:13:20.075460   50613 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 00:13:20.090949   50613 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 00:13:20.103551   50613 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 00:13:20.226887   50613 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 00:13:20.352088   50613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 00:13:20.367626   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 00:13:20.388084   50613 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1108 00:13:20.388153   50613 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:20.398506   50613 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 00:13:20.398573   50613 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:20.408335   50613 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:20.417991   50613 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
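
	The sed one-liners above are targeted rewrites of single `key = value` lines in the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf: pin the pause image and force the cgroupfs cgroup manager. A minimal local sketch of the same edit in Go, with regexp standing in for sed (hypothetical helper name; minikube itself simply runs sed on the guest over SSH):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setCrioKey rewrites a single `key = ...` line in a CRI-O drop-in,
	// mirroring sed -i 's|^.*key = .*$|key = "value"|' from the log.
	func setCrioKey(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		for k, v := range map[string]string{
			"pause_image":    "registry.k8s.io/pause:3.9",
			"cgroup_manager": "cgroupfs",
		} {
			if err := setCrioKey(conf, k, v); err != nil {
				fmt.Println(err)
			}
		}
	}
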
	I1108 00:13:20.427599   50613 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 00:13:20.439537   50613 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 00:13:20.450914   50613 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1108 00:13:20.450972   50613 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1108 00:13:20.464456   50613 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
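
	The sysctl probe at crio.go:148 is allowed to fail ("might be okay") because on a fresh guest the br_netfilter module is simply not loaded yet, so /proc/sys/net/bridge/ does not exist; the recovery is to load the module and then enable IPv4 forwarding. A minimal sketch of that fallback as a local Go helper (hypothetical name; minikube runs these commands on the guest via ssh_runner):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureBridgeNetfilter mirrors the sequence in the log: probe the
	// sysctl, load br_netfilter if the key is missing, then enable IPv4
	// forwarding, which pod networking needs regardless.
	func ensureBridgeNetfilter() error {
		// Probe fails with status 255 when /proc/sys/net/bridge/... is absent.
		if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			// The failed probe "might be okay": the module just isn't loaded yet.
			if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
				return fmt.Errorf("loading br_netfilter: %w", err)
			}
		}
		return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
	}

	func main() {
		if err := ensureBridgeNetfilter(); err != nil {
			fmt.Println("netfilter setup failed:", err)
		}
	}
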
	I1108 00:13:20.475133   50613 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 00:13:20.586162   50613 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 00:13:20.799540   50613 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 00:13:20.799615   50613 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 00:13:20.808503   50613 start.go:540] Will wait 60s for crictl version
	I1108 00:13:20.808551   50613 ssh_runner.go:195] Run: which crictl
	I1108 00:13:20.812371   50613 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 00:13:20.853073   50613 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1108 00:13:20.853166   50613 ssh_runner.go:195] Run: crio --version
	I1108 00:13:20.904737   50613 ssh_runner.go:195] Run: crio --version
	I1108 00:13:20.958281   50613 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1108 00:13:20.959792   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetIP
	I1108 00:13:20.962399   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:20.962740   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:20.962775   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:20.963037   50613 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1108 00:13:20.967403   50613 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:13:20.980199   50613 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1108 00:13:20.980261   50613 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:13:21.024679   50613 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1108 00:13:21.024757   50613 ssh_runner.go:195] Run: which lz4
	I1108 00:13:21.028861   50613 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1108 00:13:21.032736   50613 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1108 00:13:21.032762   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1108 00:13:19.898602   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1108 00:13:19.898655   50505 cache_images.go:123] Successfully loaded all cached images
	I1108 00:13:19.898663   50505 cache_images.go:92] LoadImages completed in 18.919280882s
	I1108 00:13:19.898742   50505 ssh_runner.go:195] Run: crio config
	I1108 00:13:19.970909   50505 cni.go:84] Creating CNI manager for ""
	I1108 00:13:19.970936   50505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:13:19.970958   50505 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1108 00:13:19.970986   50505 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.176 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-320390 NodeName:no-preload-320390 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 00:13:19.971171   50505 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.176
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-320390"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.176
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.176"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 00:13:19.971273   50505 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-320390 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:no-preload-320390 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1108 00:13:19.971347   50505 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1108 00:13:19.984469   50505 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 00:13:19.984551   50505 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 00:13:19.995491   50505 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1108 00:13:20.013609   50505 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 00:13:20.031507   50505 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I1108 00:13:20.051978   50505 ssh_runner.go:195] Run: grep 192.168.61.176	control-plane.minikube.internal$ /etc/hosts
	I1108 00:13:20.057139   50505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.176	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:13:20.071438   50505 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390 for IP: 192.168.61.176
	I1108 00:13:20.071471   50505 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:13:20.071635   50505 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1108 00:13:20.071691   50505 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1108 00:13:20.071782   50505 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/client.key
	I1108 00:13:20.071848   50505 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/apiserver.key.492ad1cf
	I1108 00:13:20.071899   50505 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/proxy-client.key
	I1108 00:13:20.072026   50505 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem (1338 bytes)
	W1108 00:13:20.072064   50505 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848_empty.pem, impossibly tiny 0 bytes
	I1108 00:13:20.072080   50505 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 00:13:20.072130   50505 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1108 00:13:20.072167   50505 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1108 00:13:20.072205   50505 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1108 00:13:20.072260   50505 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:13:20.073092   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1108 00:13:20.099422   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 00:13:20.126257   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 00:13:20.153126   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 00:13:20.184849   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 00:13:20.215515   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 00:13:20.247686   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 00:13:20.277712   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 00:13:20.304438   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /usr/share/ca-certificates/168482.pem (1708 bytes)
	I1108 00:13:20.330321   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 00:13:20.361411   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem --> /usr/share/ca-certificates/16848.pem (1338 bytes)
	I1108 00:13:20.390456   50505 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 00:13:20.410634   50505 ssh_runner.go:195] Run: openssl version
	I1108 00:13:20.418597   50505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168482.pem && ln -fs /usr/share/ca-certificates/168482.pem /etc/ssl/certs/168482.pem"
	I1108 00:13:20.431853   50505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168482.pem
	I1108 00:13:20.438127   50505 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1108 00:13:20.438271   50505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168482.pem
	I1108 00:13:20.445644   50505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168482.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 00:13:20.456959   50505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 00:13:20.466413   50505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:20.472311   50505 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:20.472365   50505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:20.477965   50505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 00:13:20.487454   50505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16848.pem && ln -fs /usr/share/ca-certificates/16848.pem /etc/ssl/certs/16848.pem"
	I1108 00:13:20.496731   50505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16848.pem
	I1108 00:13:20.502531   50505 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1108 00:13:20.502591   50505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16848.pem
	I1108 00:13:20.509683   50505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16848.pem /etc/ssl/certs/51391683.0"
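
	Each certificate above gets the same three-step treatment: link it under /usr/share/ca-certificates, compute its OpenSSL subject hash, then link /etc/ssl/certs/<hash>.0 at it. That hashed-symlink layout is how OpenSSL looks up CA certificates in a directory. A sketch of the last two steps as one Go helper (hypothetical name; the log does this with openssl and ln -fs over SSH, and writing /etc/ssl/certs requires root):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installHashedCALink computes the OpenSSL subject hash of certPath and
	// links /etc/ssl/certs/<hash>.0 at it, matching the commands in the log.
	func installHashedCALink(certPath string) error {
		// `openssl x509 -hash -noout` prints the subject hash, e.g. "b5213941".
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		// ln -fs semantics: drop any stale link so repeated runs stay idempotent.
		_ = os.Remove(link)
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installHashedCALink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}
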
	I1108 00:13:20.520960   50505 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1108 00:13:20.525545   50505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 00:13:20.531367   50505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 00:13:20.537422   50505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 00:13:20.543607   50505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 00:13:20.548942   50505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 00:13:20.554419   50505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
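
	Each `-checkend 86400` run asks a single question: will this certificate still be valid 24 hours from now? A local analogue using crypto/x509 instead of shelling out to openssl (a sketch under that assumption, not minikube's actual code path, which runs openssl on the guest):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// validFor reports whether the PEM certificate at path remains valid
	// `window` from now, the crypto/x509 equivalent of
	// `openssl x509 -noout -checkend <seconds>`.
	func validFor(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := validFor("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
		fmt.Println(ok, err)
	}
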
	I1108 00:13:20.559633   50505 kubeadm.go:404] StartCluster: {Name:no-preload-320390 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-320390 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.176 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:13:20.559719   50505 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 00:13:20.559766   50505 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:13:20.603718   50505 cri.go:89] found id: ""
	I1108 00:13:20.603795   50505 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 00:13:20.613389   50505 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1108 00:13:20.613418   50505 kubeadm.go:636] restartCluster start
	I1108 00:13:20.613476   50505 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 00:13:20.622276   50505 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:20.623645   50505 kubeconfig.go:92] found "no-preload-320390" server: "https://192.168.61.176:8443"
	I1108 00:13:20.626874   50505 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 00:13:20.638188   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:20.638238   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:20.649536   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:20.649553   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:20.649610   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:20.660145   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:21.160858   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:21.160936   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:21.174163   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:21.660441   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:21.660526   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:21.675795   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:22.160281   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:22.160358   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:22.175777   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:22.660249   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:22.660328   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:22.675747   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:23.160280   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:23.160360   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:23.174686   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:23.661260   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:23.661343   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:23.675936   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:24.160440   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:24.160558   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:24.174501   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:21.180066   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:21.180534   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:21.180563   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:21.180492   51635 retry.go:31] will retry after 320.996627ms: waiting for machine to come up
	I1108 00:13:21.503202   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:21.503721   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:21.503750   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:21.503689   51635 retry.go:31] will retry after 431.944242ms: waiting for machine to come up
	I1108 00:13:21.937564   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:21.938025   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:21.938054   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:21.937972   51635 retry.go:31] will retry after 592.354358ms: waiting for machine to come up
	I1108 00:13:22.531850   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:22.532321   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:22.532364   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:22.532272   51635 retry.go:31] will retry after 589.753727ms: waiting for machine to come up
	I1108 00:13:23.124275   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:23.124784   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:23.124825   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:23.124746   51635 retry.go:31] will retry after 596.910282ms: waiting for machine to come up
	I1108 00:13:23.722967   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:23.723389   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:23.723419   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:23.723349   51635 retry.go:31] will retry after 793.320391ms: waiting for machine to come up
	I1108 00:13:24.518525   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:24.518953   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:24.518985   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:24.518914   51635 retry.go:31] will retry after 1.247294281s: waiting for machine to come up
	I1108 00:13:25.768137   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:25.768598   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:25.768634   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:25.768541   51635 retry.go:31] will retry after 1.468389149s: waiting for machine to come up
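
	The retry.go:31 lines show the wait-for-DHCP loop sleeping progressively longer, from roughly 300ms up toward multi-second delays, with some jitter. A hand-rolled sketch of that shape (not minikube's actual retry package; the growth factor and jitter here are illustrative assumptions):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff keeps calling fn until it succeeds or attempts run
	// out, sleeping a jittered, growing delay in between, the pattern behind
	// the "will retry after 320.996627ms ... 2.218327688s" lines above.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		delay := base
		for i := 0; i < attempts; i++ {
			if err := fn(); err == nil {
				return nil
			}
			// Jitter the delay so concurrent waiters don't poll in lockstep.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay += delay / 2 // grow ~1.5x per attempt (an assumption)
		}
		return errors.New("machine never came up")
	}

	func main() {
		calls := 0
		_ = retryWithBackoff(10, 300*time.Millisecond, func() error {
			calls++
			if calls < 4 {
				return errors.New("no IP yet") // stand-in for the DHCP lease lookup
			}
			return nil
		})
	}
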
	I1108 00:13:22.802292   50613 crio.go:444] Took 1.773480 seconds to copy over tarball
	I1108 00:13:22.802374   50613 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1108 00:13:25.811996   50613 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.009592787s)
	I1108 00:13:25.812027   50613 crio.go:451] Took 3.009706 seconds to extract the tarball
	I1108 00:13:25.812036   50613 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1108 00:13:25.852011   50613 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:13:25.903032   50613 crio.go:496] all images are preloaded for cri-o runtime.
	I1108 00:13:25.903055   50613 cache_images.go:84] Images are preloaded, skipping loading
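
	The preload path for this profile is three guest-side steps: scp the ~457 MB tarball to /preloaded.tar.lz4, unpack it into /var with tar -I lz4, then delete the tarball, after which `crictl images` confirms everything is present. A local sketch of the extract-and-clean-up step (assumes lz4 is on PATH, as it is in the minikube guest image):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// extractPreload unpacks an lz4-compressed image tarball into dest and
	// removes it afterwards, the same `tar -I lz4 -C /var -xf` + rm
	// sequence the log shows running over SSH.
	func extractPreload(tarball, dest string) error {
		cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", dest, "-xf", tarball)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("extracting %s: %w", tarball, err)
		}
		return os.Remove(tarball)
	}

	func main() {
		if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
			fmt.Println(err)
		}
	}
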
	I1108 00:13:25.903160   50613 ssh_runner.go:195] Run: crio config
	I1108 00:13:25.964562   50613 cni.go:84] Creating CNI manager for ""
	I1108 00:13:25.964585   50613 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:13:25.964601   50613 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1108 00:13:25.964618   50613 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.159 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-253253 NodeName:embed-certs-253253 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 00:13:25.964768   50613 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-253253"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 00:13:25.964869   50613 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-253253 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:embed-certs-253253 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1108 00:13:25.964931   50613 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1108 00:13:25.973956   50613 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 00:13:25.974031   50613 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 00:13:25.982070   50613 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1108 00:13:26.001066   50613 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 00:13:26.020258   50613 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1108 00:13:26.039418   50613 ssh_runner.go:195] Run: grep 192.168.39.159	control-plane.minikube.internal$ /etc/hosts
	I1108 00:13:26.043133   50613 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:13:26.055865   50613 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253 for IP: 192.168.39.159
	I1108 00:13:26.055902   50613 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:13:26.056069   50613 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1108 00:13:26.056268   50613 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1108 00:13:26.056374   50613 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/client.key
	I1108 00:13:26.128533   50613 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/apiserver.key.b15c5797
	I1108 00:13:26.128666   50613 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/proxy-client.key
	I1108 00:13:26.128842   50613 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem (1338 bytes)
	W1108 00:13:26.128884   50613 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848_empty.pem, impossibly tiny 0 bytes
	I1108 00:13:26.128895   50613 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 00:13:26.128930   50613 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1108 00:13:26.128953   50613 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1108 00:13:26.128975   50613 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1108 00:13:26.129016   50613 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:13:26.129621   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1108 00:13:26.153776   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 00:13:26.179006   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 00:13:26.202199   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 00:13:26.225241   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 00:13:26.247745   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 00:13:26.270546   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 00:13:26.297075   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 00:13:26.320835   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem --> /usr/share/ca-certificates/16848.pem (1338 bytes)
	I1108 00:13:26.344068   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /usr/share/ca-certificates/168482.pem (1708 bytes)
	I1108 00:13:26.367085   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 00:13:26.391491   50613 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 00:13:26.408055   50613 ssh_runner.go:195] Run: openssl version
	I1108 00:13:26.413824   50613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168482.pem && ln -fs /usr/share/ca-certificates/168482.pem /etc/ssl/certs/168482.pem"
	I1108 00:13:26.423666   50613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168482.pem
	I1108 00:13:26.428281   50613 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1108 00:13:26.428332   50613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168482.pem
	I1108 00:13:26.433901   50613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168482.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 00:13:26.443832   50613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 00:13:26.453722   50613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:26.458290   50613 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:26.458341   50613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:26.464035   50613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 00:13:26.473908   50613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16848.pem && ln -fs /usr/share/ca-certificates/16848.pem /etc/ssl/certs/16848.pem"
	I1108 00:13:26.483600   50613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16848.pem
	I1108 00:13:26.488053   50613 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1108 00:13:26.488113   50613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16848.pem
	I1108 00:13:26.493571   50613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16848.pem /etc/ssl/certs/51391683.0"
	I1108 00:13:26.503466   50613 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1108 00:13:26.508047   50613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 00:13:26.514165   50613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 00:13:26.520278   50613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 00:13:26.526421   50613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 00:13:26.532388   50613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 00:13:26.538323   50613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1108 00:13:26.544215   50613 kubeadm.go:404] StartCluster: {Name:embed-certs-253253 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-253253 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:13:26.544287   50613 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 00:13:26.544330   50613 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:13:26.586501   50613 cri.go:89] found id: ""
	I1108 00:13:26.586578   50613 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 00:13:26.596647   50613 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1108 00:13:26.596676   50613 kubeadm.go:636] restartCluster start
	I1108 00:13:26.596734   50613 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 00:13:26.605901   50613 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:26.607305   50613 kubeconfig.go:92] found "embed-certs-253253" server: "https://192.168.39.159:8443"
	I1108 00:13:26.610434   50613 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 00:13:26.619238   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:26.619291   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:26.630724   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:26.630746   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:26.630787   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:26.641997   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:24.660263   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:24.660349   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:24.675197   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:25.160678   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:25.160774   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:25.172593   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:25.660613   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:25.660696   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:25.672242   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:26.160884   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:26.160978   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:26.174734   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:26.660269   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:26.660337   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:26.671721   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:27.160250   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:27.160344   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:27.171104   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:27.660667   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:27.660729   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:27.671899   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:28.160408   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:28.160471   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:28.170733   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:28.660264   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:28.660338   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:28.671482   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:29.161084   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:29.161163   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:29.172174   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:27.238049   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:27.238487   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:27.238518   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:27.238428   51635 retry.go:31] will retry after 1.602246301s: waiting for machine to come up
	I1108 00:13:28.842785   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:28.843235   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:28.843259   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:28.843188   51635 retry.go:31] will retry after 2.218327688s: waiting for machine to come up
	I1108 00:13:27.142567   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:27.242647   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:27.256767   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:27.642212   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:27.642306   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:27.654185   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:28.142751   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:28.142832   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:28.154141   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:28.642738   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:28.642817   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:28.654476   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:29.143085   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:29.143168   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:29.154553   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:29.642422   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:29.642499   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:29.658048   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:30.142497   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:30.142568   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:30.153710   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:30.642216   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:30.642291   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:30.658036   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:31.142547   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:31.142634   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:31.159124   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:31.642720   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:31.642810   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:31.654593   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:29.660882   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:29.660944   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:29.675528   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:30.161058   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:30.161121   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:30.171493   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:30.638722   50505 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1108 00:13:30.638762   50505 kubeadm.go:1128] stopping kube-system containers ...
	I1108 00:13:30.638776   50505 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1108 00:13:30.638825   50505 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:13:30.677982   50505 cri.go:89] found id: ""
	I1108 00:13:30.678064   50505 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1108 00:13:30.693650   50505 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:13:30.702679   50505 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
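Exit status 2 from `ls` here means none of the kubeconfig files exist on the node, so there is no stale configuration to clean up and the cluster is rebuilt from the generated kubeadm.yaml. A tiny Go sketch of that decision, with a local shell standing in for the SSH runner:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	files := "/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf " +
		"/etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf"
	// ls exits 2 when any operand is missing.
	if err := exec.Command("/bin/bash", "-c", "sudo ls -la "+files).Run(); err != nil {
		fmt.Println("config check failed, skipping stale config cleanup:", err)
		return
	}
	fmt.Println("existing kubeconfigs found; stale configs would be cleaned up")
}
```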
	I1108 00:13:30.702757   50505 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:13:30.711179   50505 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1108 00:13:30.711212   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:30.843638   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:31.970868   50505 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.127188218s)
	I1108 00:13:31.970904   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:32.167903   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:32.242076   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
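Rather than a single `kubeadm init`, the reconfigure path above replays the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config. A condensed Go sketch of that sequence; the commands mirror the log, but actually running them requires root on a prepared node.

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		script := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase)
		if out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
			return
		}
	}
	fmt.Println("control plane reconfigured; next, wait for the apiserver process")
}
```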
	I1108 00:13:32.324914   50505 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:13:32.325001   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:32.342576   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:32.861296   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:33.360958   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:33.861308   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:31.062973   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:31.063425   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:31.063465   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:31.063370   51635 retry.go:31] will retry after 2.935881965s: waiting for machine to come up
	I1108 00:13:34.002009   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:34.002456   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:34.002481   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:34.002385   51635 retry.go:31] will retry after 2.918632194s: waiting for machine to come up
	I1108 00:13:32.142573   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:32.142668   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:32.156513   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:32.643129   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:32.643203   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:32.654790   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:33.143023   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:33.143114   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:33.159475   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:33.642631   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:33.642728   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:33.658632   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:34.142142   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:34.142218   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:34.158375   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:34.642356   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:34.642437   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:34.657692   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:35.142180   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:35.142276   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:35.157616   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:35.642121   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:35.642194   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:35.656642   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:36.142162   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:36.142270   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:36.153340   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:36.619909   50613 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1108 00:13:36.619941   50613 kubeadm.go:1128] stopping kube-system containers ...
	I1108 00:13:36.619958   50613 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1108 00:13:36.620035   50613 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:13:36.656935   50613 cri.go:89] found id: ""
	I1108 00:13:36.657008   50613 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1108 00:13:36.671784   50613 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:13:36.680073   50613 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:13:36.680120   50613 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:13:36.688560   50613 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1108 00:13:36.688575   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:36.802484   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:34.361558   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:34.860720   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:34.881793   50505 api_server.go:72] duration metric: took 2.55688905s to wait for apiserver process to appear ...
	I1108 00:13:34.881823   50505 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:13:34.881843   50505 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1108 00:13:38.396447   50505 api_server.go:279] https://192.168.61.176:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:13:38.396488   50505 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:13:38.396503   50505 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1108 00:13:38.471135   50505 api_server.go:279] https://192.168.61.176:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:13:38.471165   50505 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:13:38.971845   50505 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1108 00:13:38.977126   50505 api_server.go:279] https://192.168.61.176:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:13:38.977163   50505 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:13:39.472030   50505 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1108 00:13:39.477778   50505 api_server.go:279] https://192.168.61.176:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:13:39.477810   50505 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:13:39.971333   50505 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1108 00:13:39.977224   50505 api_server.go:279] https://192.168.61.176:8443/healthz returned 200:
	ok
	I1108 00:13:39.987415   50505 api_server.go:141] control plane version: v1.28.3
	I1108 00:13:39.987446   50505 api_server.go:131] duration metric: took 5.10561478s to wait for apiserver health ...
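The 403 → 500 → 200 progression above is the normal startup order: the server answers before anonymous access to `/healthz` is permitted (hence `system:anonymous ... Forbidden`), then individual post-start hooks such as `rbac/bootstrap-roles` report `[-]` until they complete, and finally the endpoint returns a bare `ok`. A sketch of a poll loop that treats both non-200 cases as "not ready yet"; the endpoint and the insecure TLS config are placeholders for this sketch only.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// The apiserver cert is self-signed, so skip verification in this sketch.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	url := "https://192.0.2.10:8443/healthz" // placeholder endpoint
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			// 403 (anonymous forbidden) and 500 (hooks pending) both mean
			// "not ready yet" — keep polling rather than failing hard.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}
```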
	I1108 00:13:39.987456   50505 cni.go:84] Creating CNI manager for ""
	I1108 00:13:39.987465   50505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:13:39.989270   50505 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:13:36.922427   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:36.922874   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:36.922916   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:36.922824   51635 retry.go:31] will retry after 3.960656744s: waiting for machine to come up
	I1108 00:13:40.886022   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:40.886563   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Found IP for machine: 192.168.72.116
	I1108 00:13:40.886591   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has current primary IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:40.886601   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Reserving static IP address...
	I1108 00:13:40.886974   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-039263", mac: "52:54:00:aa:72:05", ip: "192.168.72.116"} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:40.887012   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | skip adding static IP to network mk-default-k8s-diff-port-039263 - found existing host DHCP lease matching {name: "default-k8s-diff-port-039263", mac: "52:54:00:aa:72:05", ip: "192.168.72.116"}
	I1108 00:13:40.887037   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Getting to WaitForSSH function...
	I1108 00:13:40.887058   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Reserved static IP address: 192.168.72.116
	I1108 00:13:40.887072   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for SSH to be available...
	I1108 00:13:40.889373   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:40.889771   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:40.889803   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:40.889991   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Using SSH client type: external
	I1108 00:13:40.890018   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa (-rw-------)
	I1108 00:13:40.890054   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1108 00:13:40.890068   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | About to run SSH command:
	I1108 00:13:40.890082   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | exit 0
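`WaitForSSH` simply runs `exit 0` through an external ssh client with host-key checking disabled, as the option list above shows; a zero exit means the guest's sshd is reachable. A minimal Go sketch with a placeholder address and key path:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshReady(addr, key string) bool {
	// A subset of the options in the log; success of `exit 0` is the signal.
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", key, addr, "exit 0")
	return cmd.Run() == nil
}

func main() {
	for i := 0; i < 30; i++ {
		if sshReady("docker@192.0.2.10", "/path/to/id_rsa") {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for SSH")
}
```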
	I1108 00:13:37.573684   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:37.781978   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:37.863424   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:37.935306   50613 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:13:37.935377   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:37.947059   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:38.458806   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:38.959076   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:39.459045   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:39.959244   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:40.458249   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:40.480623   50613 api_server.go:72] duration metric: took 2.545315304s to wait for apiserver process to appear ...
	I1108 00:13:40.480650   50613 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:13:40.480668   50613 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1108 00:13:42.285976   50022 start.go:369] acquired machines lock for "old-k8s-version-590541" in 56.809842177s
	I1108 00:13:42.286028   50022 start.go:96] Skipping create...Using existing machine configuration
	I1108 00:13:42.286039   50022 fix.go:54] fixHost starting: 
	I1108 00:13:42.286455   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:13:42.286492   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:13:42.305869   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46287
	I1108 00:13:42.306363   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:13:42.306845   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:13:42.306871   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:13:42.307221   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:13:42.307548   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:13:42.307740   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetState
	I1108 00:13:42.309513   50022 fix.go:102] recreateIfNeeded on old-k8s-version-590541: state=Stopped err=<nil>
	I1108 00:13:42.309539   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	W1108 00:13:42.309706   50022 fix.go:128] unexpected machine state, will restart: <nil>
	I1108 00:13:42.311819   50022 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-590541" ...
	I1108 00:13:40.997357   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | SSH cmd err, output: <nil>: 
	I1108 00:13:40.997688   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetConfigRaw
	I1108 00:13:40.998394   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetIP
	I1108 00:13:41.001148   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.001578   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.001612   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.001940   51228 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/config.json ...
	I1108 00:13:41.002174   51228 machine.go:88] provisioning docker machine ...
	I1108 00:13:41.002197   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:41.002421   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetMachineName
	I1108 00:13:41.002577   51228 buildroot.go:166] provisioning hostname "default-k8s-diff-port-039263"
	I1108 00:13:41.002600   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetMachineName
	I1108 00:13:41.002800   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:41.005167   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.005544   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.005584   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.005873   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:41.006029   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.006176   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.006291   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:41.006425   51228 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:41.007012   51228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.116 22 <nil> <nil>}
	I1108 00:13:41.007036   51228 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-039263 && echo "default-k8s-diff-port-039263" | sudo tee /etc/hostname
	I1108 00:13:41.168664   51228 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-039263
	
	I1108 00:13:41.168698   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:41.171709   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.172090   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.172132   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.172266   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:41.172457   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.172650   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.172867   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:41.173130   51228 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:41.173626   51228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.116 22 <nil> <nil>}
	I1108 00:13:41.173654   51228 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-039263' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-039263/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-039263' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 00:13:41.324510   51228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 00:13:41.324539   51228 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1108 00:13:41.324586   51228 buildroot.go:174] setting up certificates
	I1108 00:13:41.324598   51228 provision.go:83] configureAuth start
	I1108 00:13:41.324610   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetMachineName
	I1108 00:13:41.324933   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetIP
	I1108 00:13:41.327797   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.328176   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.328213   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.328321   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:41.330558   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.330921   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.330955   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.331062   51228 provision.go:138] copyHostCerts
	I1108 00:13:41.331128   51228 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1108 00:13:41.331150   51228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1108 00:13:41.331222   51228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1108 00:13:41.331337   51228 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1108 00:13:41.331355   51228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1108 00:13:41.331387   51228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1108 00:13:41.331467   51228 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1108 00:13:41.331479   51228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1108 00:13:41.331506   51228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1108 00:13:41.331592   51228 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-039263 san=[192.168.72.116 192.168.72.116 localhost 127.0.0.1 minikube default-k8s-diff-port-039263]
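The server certificate is generated with SAN entries for every name the endpoint might be reached by: the VM IP (listed twice in the log), `localhost`, `127.0.0.1`, `minikube`, and the machine hostname. A self-signed sketch with `crypto/x509` that mirrors that SAN list; minikube signs against its own CA instead, which this sketch skips for brevity.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-039263"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN entries, mirroring the san=[...] list in the log:
		DNSNames:    []string{"localhost", "minikube", "default-k8s-diff-port-039263"},
		IPAddresses: []net.IP{net.ParseIP("192.168.72.116"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
```

The resulting server.pem / server-key.pem pair is what the later scp steps copy to /etc/docker on the guest.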
	I1108 00:13:41.452051   51228 provision.go:172] copyRemoteCerts
	I1108 00:13:41.452123   51228 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 00:13:41.452156   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:41.454755   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.455056   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.455089   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.455288   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:41.455512   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.455704   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:41.455831   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:13:41.554387   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 00:13:41.586357   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 00:13:41.616703   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1108 00:13:41.646461   51228 provision.go:86] duration metric: configureAuth took 321.850044ms
	I1108 00:13:41.646489   51228 buildroot.go:189] setting minikube options for container-runtime
	I1108 00:13:41.646730   51228 config.go:182] Loaded profile config "default-k8s-diff-port-039263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:13:41.646825   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:41.650386   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.650813   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.650856   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.651031   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:41.651232   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.651422   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.651598   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:41.651763   51228 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:41.652302   51228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.116 22 <nil> <nil>}
	I1108 00:13:41.652325   51228 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 00:13:42.006373   51228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 00:13:42.006401   51228 machine.go:91] provisioned docker machine in 1.004212938s
	I1108 00:13:42.006414   51228 start.go:300] post-start starting for "default-k8s-diff-port-039263" (driver="kvm2")
	I1108 00:13:42.006426   51228 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 00:13:42.006445   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:42.006785   51228 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 00:13:42.006811   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:42.009619   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.009950   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:42.009986   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.010127   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:42.010344   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:42.010515   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:42.010673   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:13:42.106366   51228 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 00:13:42.110676   51228 info.go:137] Remote host: Buildroot 2021.02.12
	I1108 00:13:42.110701   51228 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1108 00:13:42.110770   51228 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1108 00:13:42.110869   51228 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1108 00:13:42.110972   51228 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 00:13:42.121223   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:13:42.146966   51228 start.go:303] post-start completed in 140.536976ms
	I1108 00:13:42.146996   51228 fix.go:56] fixHost completed within 22.681133015s
	I1108 00:13:42.147019   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:42.149705   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.150132   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:42.150165   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.150406   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:42.150606   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:42.150818   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:42.150988   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:42.151156   51228 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:42.151511   51228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.116 22 <nil> <nil>}
	I1108 00:13:42.151523   51228 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1108 00:13:42.285789   51228 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699402422.233004693
	
	I1108 00:13:42.285815   51228 fix.go:206] guest clock: 1699402422.233004693
	I1108 00:13:42.285823   51228 fix.go:219] Guest: 2023-11-08 00:13:42.233004693 +0000 UTC Remote: 2023-11-08 00:13:42.146999966 +0000 UTC m=+101.273648910 (delta=86.004727ms)
	I1108 00:13:42.285869   51228 fix.go:190] guest clock delta is within tolerance: 86.004727ms
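The guest clock check reads the VM's time with `date +%s.%N`, compares it to the host clock, and accepts the machine only if the delta (86ms here) is within tolerance. A sketch of that comparison; the 2-second tolerance and the canned guest output are assumptions for illustration, not values taken from minikube.

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// runOnGuest is a hypothetical stand-in for the SSH runner; it returns the
// guest timestamp seen in the log above.
func runOnGuest(cmd string) string {
	return "1699402422.233004693"
}

func main() {
	out := strings.TrimSpace(runOnGuest("date +%s.%N"))
	secs, err := strconv.ParseFloat(out, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	const tolerance = 2 * time.Second // assumed threshold for this sketch
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v too large; would resync the clock\n", delta)
	}
}
```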
	I1108 00:13:42.285877   51228 start.go:83] releasing machines lock for "default-k8s-diff-port-039263", held for 22.820045752s
	I1108 00:13:42.285913   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:42.286161   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetIP
	I1108 00:13:42.288711   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.289095   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:42.289133   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.289241   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:42.289864   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:42.290109   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:42.290209   51228 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 00:13:42.290261   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:42.290323   51228 ssh_runner.go:195] Run: cat /version.json
	I1108 00:13:42.290345   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:42.293063   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.293219   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.293451   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:42.293483   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.293570   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:42.293599   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.293721   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:42.293878   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:42.293887   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:42.294075   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:42.294085   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:42.294234   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:42.294280   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:13:42.294336   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:13:42.386493   51228 ssh_runner.go:195] Run: systemctl --version
	I1108 00:13:42.411009   51228 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 00:13:42.558200   51228 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 00:13:42.566040   51228 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 00:13:42.566116   51228 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:13:42.584775   51228 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 00:13:42.584800   51228 start.go:472] detecting cgroup driver to use...
	I1108 00:13:42.584872   51228 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 00:13:42.598720   51228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 00:13:42.612836   51228 docker.go:203] disabling cri-docker service (if available) ...
	I1108 00:13:42.612927   51228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 00:13:42.627474   51228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 00:13:42.641670   51228 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 00:13:42.753616   51228 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 00:13:42.888608   51228 docker.go:219] disabling docker service ...
	I1108 00:13:42.888680   51228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 00:13:42.903298   51228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 00:13:42.920184   51228 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 00:13:43.054621   51228 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 00:13:43.181836   51228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 00:13:43.198481   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 00:13:43.219759   51228 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1108 00:13:43.219827   51228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:43.231137   51228 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 00:13:43.231221   51228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:43.242206   51228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:43.253506   51228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:43.264311   51228 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 00:13:43.276451   51228 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 00:13:43.288448   51228 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1108 00:13:43.288522   51228 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1108 00:13:43.305986   51228 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
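[Note: the exit status 255 above is expected on a fresh VM: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded, so the failed sysctl probe is followed by "modprobe br_netfilter". A sketch of that probe-then-load fallback, illustrative only:]

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBrNetfilter checks whether the bridge netfilter sysctl is visible
// and loads the module if not -- the same sequence as the log above.
func ensureBrNetfilter() error {
	const knob = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(knob); err == nil {
		return nil // module already loaded, sysctl is visible
	}
	if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
		return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := ensureBrNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}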
	I1108 00:13:43.318366   51228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 00:13:43.479739   51228 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 00:13:43.705223   51228 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 00:13:43.705302   51228 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 00:13:43.711842   51228 start.go:540] Will wait 60s for crictl version
	I1108 00:13:43.711915   51228 ssh_runner.go:195] Run: which crictl
	I1108 00:13:43.717688   51228 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 00:13:43.762492   51228 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
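[Note: start.go:519 and start.go:540 above each wait up to 60s, first for the CRI socket to appear and then for crictl to answer. A minimal sketch of the socket wait, assuming the same path and timeout as the log:]

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes -- the same
// shape as the 60s wait for /var/run/crio/crio.sock above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket ready")
}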
	I1108 00:13:43.762651   51228 ssh_runner.go:195] Run: crio --version
	I1108 00:13:43.814548   51228 ssh_runner.go:195] Run: crio --version
	I1108 00:13:43.870144   51228 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1108 00:13:39.990811   50505 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:13:40.020162   50505 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1108 00:13:40.064758   50505 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:13:40.081652   50505 system_pods.go:59] 8 kube-system pods found
	I1108 00:13:40.081705   50505 system_pods.go:61] "coredns-5dd5756b68-lhnz5" [936252ee-4f00-49e2-96e4-7c4f4a4ca378] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 00:13:40.081725   50505 system_pods.go:61] "etcd-no-preload-320390" [95e08672-dc80-4aa6-bd4a-e5f77bfc4b51] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 00:13:40.081738   50505 system_pods.go:61] "kube-apiserver-no-preload-320390" [3261561e-b7d5-4302-8e0b-301d00407e8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 00:13:40.081748   50505 system_pods.go:61] "kube-controller-manager-no-preload-320390" [b87602fd-b248-4529-9116-1851a4284bbf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 00:13:40.081763   50505 system_pods.go:61] "kube-proxy-c4mbm" [33806b69-57c0-4807-849b-b6a4f8a5db12] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 00:13:40.081777   50505 system_pods.go:61] "kube-scheduler-no-preload-320390" [4f7b4160-b99e-4f76-9b12-c5b1849c91b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 00:13:40.081791   50505 system_pods.go:61] "metrics-server-57f55c9bc5-th89c" [06aea7c0-065b-44a4-8d53-432f5722e937] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:13:40.081810   50505 system_pods.go:61] "storage-provisioner" [c7b0810b-1ba7-4d56-ad97-3f04d771960d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 00:13:40.081823   50505 system_pods.go:74] duration metric: took 17.024016ms to wait for pod list to return data ...
	I1108 00:13:40.081836   50505 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:13:40.093789   50505 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:13:40.093827   50505 node_conditions.go:123] node cpu capacity is 2
	I1108 00:13:40.093841   50505 node_conditions.go:105] duration metric: took 11.998569ms to run NodePressure ...
	I1108 00:13:40.093863   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:40.340962   50505 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1108 00:13:40.346004   50505 kubeadm.go:787] kubelet initialised
	I1108 00:13:40.346032   50505 kubeadm.go:788] duration metric: took 5.042344ms waiting for restarted kubelet to initialise ...
	I1108 00:13:40.346044   50505 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:13:40.355648   50505 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-lhnz5" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:42.377985   50505 pod_ready.go:102] pod "coredns-5dd5756b68-lhnz5" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:42.313355   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Start
	I1108 00:13:42.313526   50022 main.go:141] libmachine: (old-k8s-version-590541) Ensuring networks are active...
	I1108 00:13:42.314176   50022 main.go:141] libmachine: (old-k8s-version-590541) Ensuring network default is active
	I1108 00:13:42.314638   50022 main.go:141] libmachine: (old-k8s-version-590541) Ensuring network mk-old-k8s-version-590541 is active
	I1108 00:13:42.315060   50022 main.go:141] libmachine: (old-k8s-version-590541) Getting domain xml...
	I1108 00:13:42.315833   50022 main.go:141] libmachine: (old-k8s-version-590541) Creating domain...
	I1108 00:13:43.739499   50022 main.go:141] libmachine: (old-k8s-version-590541) Waiting to get IP...
	I1108 00:13:43.740647   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:43.741195   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:43.741259   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:43.741155   51822 retry.go:31] will retry after 195.621332ms: waiting for machine to come up
	I1108 00:13:43.938557   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:43.939127   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:43.939268   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:43.939200   51822 retry.go:31] will retry after 278.651736ms: waiting for machine to come up
	I1108 00:13:44.219831   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:44.220473   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:44.220500   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:44.220418   51822 retry.go:31] will retry after 384.748872ms: waiting for machine to come up
	I1108 00:13:44.607110   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:44.607665   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:44.607696   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:44.607591   51822 retry.go:31] will retry after 401.60668ms: waiting for machine to come up
	I1108 00:13:43.871596   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetIP
	I1108 00:13:43.874814   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:43.875307   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:43.875357   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:43.875575   51228 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1108 00:13:43.880324   51228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
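[Note: the bash one-liner above makes the host.minikube.internal entry idempotent: strip any existing line for the name, append the fresh mapping, and copy the temp file back with sudo. The same filter-then-append in Go, a sketch using a scratch path rather than /etc/hosts:]

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites an /etc/hosts-style file so exactly one line maps name,
// mirroring the { grep -v ...; echo ...; } > tmp; sudo cp tmp /etc/hosts trick.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	for len(kept) > 0 && kept[len(kept)-1] == "" {
		kept = kept[:len(kept)-1] // drop trailing blank lines before appending
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/tmp/hosts.test", "192.168.72.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}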
	I1108 00:13:43.895271   51228 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1108 00:13:43.895331   51228 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:13:43.943120   51228 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1108 00:13:43.943238   51228 ssh_runner.go:195] Run: which lz4
	I1108 00:13:43.947723   51228 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1108 00:13:43.952328   51228 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1108 00:13:43.952365   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1108 00:13:45.857547   51228 crio.go:444] Took 1.909852 seconds to copy over tarball
	I1108 00:13:45.857623   51228 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
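[Note: the sequence above is the preload fast path: stat shows no /preloaded.tar.lz4 on the VM, so the ~457 MB tarball is scp'd over and unpacked with "tar -I lz4 -C /var -xf". A sketch of the extract step, assuming lz4 is installed and the tarball is already in place:]

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// extractPreload runs the same tar invocation as the log, with lz4 as the
// external decompressor, then removes the tarball as the log does.
func extractPreload(tarball, dest string) error {
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", dest, "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("tar: %v: %s", err, out)
	}
	fmt.Printf("Took %.2f seconds to extract the tarball\n", time.Since(start).Seconds())
	return os.Remove(tarball) // may itself need root; sketch only
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}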
	I1108 00:13:45.314087   50613 api_server.go:279] https://192.168.39.159:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:13:45.314125   50613 api_server.go:103] status: https://192.168.39.159:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:13:45.314144   50613 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1108 00:13:45.333352   50613 api_server.go:279] https://192.168.39.159:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:13:45.333384   50613 api_server.go:103] status: https://192.168.39.159:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:13:45.833959   50613 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1108 00:13:45.852530   50613 api_server.go:279] https://192.168.39.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:13:45.852613   50613 api_server.go:103] status: https://192.168.39.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:13:46.333996   50613 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1108 00:13:46.346680   50613 api_server.go:279] https://192.168.39.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:13:46.346714   50613 api_server.go:103] status: https://192.168.39.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:13:46.833955   50613 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1108 00:13:46.841287   50613 api_server.go:279] https://192.168.39.159:8443/healthz returned 200:
	ok
	I1108 00:13:46.853271   50613 api_server.go:141] control plane version: v1.28.3
	I1108 00:13:46.853299   50613 api_server.go:131] duration metric: took 6.372641273s to wait for apiserver health ...
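[Note: the 403 -> 500 -> 200 progression above is normal apiserver startup: anonymous /healthz is forbidden until the RBAC bootstrap roles exist, then individual poststarthooks flip from failed to ok. A minimal poller in that spirit; TLS verification is skipped because the test cluster's CA is not in the host trust store, illustrative only:]

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns 200, printing intermediate
// statuses the way api_server.go does above.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver cert is signed by minikube's own CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.39.159:8443/healthz", time.Minute))
}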
	I1108 00:13:46.853310   50613 cni.go:84] Creating CNI manager for ""
	I1108 00:13:46.853318   50613 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:13:46.855336   50613 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:13:46.856955   50613 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:13:46.892049   50613 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1108 00:13:46.933039   50613 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:13:44.399678   50505 pod_ready.go:102] pod "coredns-5dd5756b68-lhnz5" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:45.879110   50505 pod_ready.go:92] pod "coredns-5dd5756b68-lhnz5" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:45.879142   50505 pod_ready.go:81] duration metric: took 5.523463579s waiting for pod "coredns-5dd5756b68-lhnz5" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:45.879154   50505 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:45.885356   50505 pod_ready.go:92] pod "etcd-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:45.885377   50505 pod_ready.go:81] duration metric: took 6.21581ms waiting for pod "etcd-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:45.885385   50505 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:47.914308   50505 pod_ready.go:102] pod "kube-apiserver-no-preload-320390" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:45.011074   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:45.011525   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:45.011560   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:45.011500   51822 retry.go:31] will retry after 708.154492ms: waiting for machine to come up
	I1108 00:13:45.720911   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:45.721383   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:45.721418   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:45.721294   51822 retry.go:31] will retry after 746.365542ms: waiting for machine to come up
	I1108 00:13:46.469031   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:46.469615   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:46.469641   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:46.469556   51822 retry.go:31] will retry after 924.305758ms: waiting for machine to come up
	I1108 00:13:47.395756   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:47.396297   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:47.396323   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:47.396241   51822 retry.go:31] will retry after 1.343866256s: waiting for machine to come up
	I1108 00:13:48.741427   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:48.741851   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:48.741883   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:48.741816   51822 retry.go:31] will retry after 1.388849147s: waiting for machine to come up
	I1108 00:13:49.625178   51228 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.76753046s)
	I1108 00:13:49.625229   51228 crio.go:451] Took 3.767633 seconds to extract the tarball
	I1108 00:13:49.625242   51228 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1108 00:13:49.670263   51228 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:13:49.727650   51228 crio.go:496] all images are preloaded for cri-o runtime.
	I1108 00:13:49.727677   51228 cache_images.go:84] Images are preloaded, skipping loading
	I1108 00:13:49.727747   51228 ssh_runner.go:195] Run: crio config
	I1108 00:13:49.811565   51228 cni.go:84] Creating CNI manager for ""
	I1108 00:13:49.811592   51228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:13:49.811615   51228 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1108 00:13:49.811639   51228 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.116 APIServerPort:8444 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-039263 NodeName:default-k8s-diff-port-039263 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 00:13:49.811812   51228 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.116
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-039263"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 00:13:49.811906   51228 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-039263 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-039263 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1108 00:13:49.811984   51228 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1108 00:13:49.822961   51228 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 00:13:49.823027   51228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 00:13:49.832632   51228 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1108 00:13:49.850812   51228 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 00:13:49.869345   51228 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1108 00:13:49.887645   51228 ssh_runner.go:195] Run: grep 192.168.72.116	control-plane.minikube.internal$ /etc/hosts
	I1108 00:13:49.892538   51228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:13:49.907166   51228 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263 for IP: 192.168.72.116
	I1108 00:13:49.907205   51228 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:13:49.907374   51228 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1108 00:13:49.907425   51228 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1108 00:13:49.907523   51228 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/client.key
	I1108 00:13:49.907601   51228 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/apiserver.key.b2cbdf93
	I1108 00:13:49.907658   51228 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/proxy-client.key
	I1108 00:13:49.907807   51228 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem (1338 bytes)
	W1108 00:13:49.907851   51228 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848_empty.pem, impossibly tiny 0 bytes
	I1108 00:13:49.907872   51228 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 00:13:49.907915   51228 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1108 00:13:49.907951   51228 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1108 00:13:49.907988   51228 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1108 00:13:49.908046   51228 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:13:49.908955   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1108 00:13:49.938941   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 00:13:49.964654   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 00:13:49.991354   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 00:13:50.018895   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 00:13:50.048330   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 00:13:50.076095   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 00:13:50.103752   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 00:13:50.130140   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem --> /usr/share/ca-certificates/16848.pem (1338 bytes)
	I1108 00:13:50.156862   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /usr/share/ca-certificates/168482.pem (1708 bytes)
	I1108 00:13:50.181994   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 00:13:50.208069   51228 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 00:13:50.226069   51228 ssh_runner.go:195] Run: openssl version
	I1108 00:13:50.232941   51228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168482.pem && ln -fs /usr/share/ca-certificates/168482.pem /etc/ssl/certs/168482.pem"
	I1108 00:13:50.246981   51228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168482.pem
	I1108 00:13:50.252981   51228 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1108 00:13:50.253059   51228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168482.pem
	I1108 00:13:50.260626   51228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168482.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 00:13:50.274135   51228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 00:13:50.285611   51228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:50.290761   51228 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:50.290837   51228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:50.297508   51228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 00:13:50.308772   51228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16848.pem && ln -fs /usr/share/ca-certificates/16848.pem /etc/ssl/certs/16848.pem"
	I1108 00:13:50.320122   51228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16848.pem
	I1108 00:13:50.326021   51228 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1108 00:13:50.326083   51228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16848.pem
	I1108 00:13:50.332534   51228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16848.pem /etc/ssl/certs/51391683.0"
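[Note: the "openssl x509 -hash -noout" runs above compute the OpenSSL subject-name hash used to name the /etc/ssl/certs/<hash>.0 symlinks that certificate lookup expects (e.g. b5213941.0 for the minikube CA). A sketch of that hash-and-link pair; linkCert is a hypothetical helper, not minikube's code:]

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert installs pemPath into certsDir under its OpenSSL subject hash,
// mirroring the openssl + ln -fs pair in the log above.
func linkCert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // ln -fs semantics: replace an existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}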
	I1108 00:13:50.344381   51228 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1108 00:13:50.350040   51228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 00:13:50.356282   51228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 00:13:50.362850   51228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 00:13:50.378237   51228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 00:13:50.385607   51228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 00:13:50.392272   51228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
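[Note: "-checkend 86400" asks whether each certificate is still valid 24 hours from now; expired control-plane certs would force regeneration instead of a restart. The same check in pure Go with crypto/x509, a sketch:]

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the first certificate in path is still valid
// d from now -- the openssl x509 -checkend equivalent.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.After(time.Now().Add(d)), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}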
	I1108 00:13:50.399220   51228 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-039263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-039263 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.116 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:13:50.399304   51228 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 00:13:50.399358   51228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:13:50.449693   51228 cri.go:89] found id: ""
	I1108 00:13:50.449770   51228 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 00:13:50.460225   51228 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1108 00:13:50.460256   51228 kubeadm.go:636] restartCluster start
	I1108 00:13:50.460313   51228 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 00:13:50.469777   51228 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:50.470973   51228 kubeconfig.go:92] found "default-k8s-diff-port-039263" server: "https://192.168.72.116:8444"
	I1108 00:13:50.473778   51228 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 00:13:50.482964   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:50.483022   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:50.495100   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:50.495123   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:50.495186   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:50.508735   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:46.949012   50613 system_pods.go:59] 9 kube-system pods found
	I1108 00:13:46.950252   50613 system_pods.go:61] "coredns-5dd5756b68-7djdr" [a1459bf3-703b-418a-bc22-c98e285c6e31] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 00:13:46.950302   50613 system_pods.go:61] "coredns-5dd5756b68-8qjbd" [fa7b05fd-725b-4c9c-815e-360f2bef8ee6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 00:13:46.950336   50613 system_pods.go:61] "etcd-embed-certs-253253" [2631ed7d-3af4-4848-bbb8-c77038f8a1f4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 00:13:46.950369   50613 system_pods.go:61] "kube-apiserver-embed-certs-253253" [80b3e8da-6474-4fd8-bb86-0d9cc70086ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 00:13:46.950391   50613 system_pods.go:61] "kube-controller-manager-embed-certs-253253" [ee19def3-043a-4832-8153-52aaf8b4748a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 00:13:46.950407   50613 system_pods.go:61] "kube-proxy-rsgkf" [509d66e3-b034-4dcd-a16e-b2f93b9efa6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 00:13:46.950482   50613 system_pods.go:61] "kube-scheduler-embed-certs-253253" [ef7bb9c3-98c8-45d8-8f54-852fb639b408] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 00:13:46.950497   50613 system_pods.go:61] "metrics-server-57f55c9bc5-s7ldx" [61cd423c-edbd-4d0c-87e8-1ac8e52c70e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:13:46.950507   50613 system_pods.go:61] "storage-provisioner" [d6157b7c-6b52-4ca8-a935-d68a0291305f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 00:13:46.950519   50613 system_pods.go:74] duration metric: took 17.457991ms to wait for pod list to return data ...
	I1108 00:13:46.950532   50613 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:13:46.956062   50613 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:13:46.956142   50613 node_conditions.go:123] node cpu capacity is 2
	I1108 00:13:46.956165   50613 node_conditions.go:105] duration metric: took 5.622732ms to run NodePressure ...
	I1108 00:13:46.956193   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:47.272695   50613 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1108 00:13:47.280001   50613 kubeadm.go:787] kubelet initialised
	I1108 00:13:47.280031   50613 kubeadm.go:788] duration metric: took 7.30064ms waiting for restarted kubelet to initialise ...
	I1108 00:13:47.280041   50613 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:13:47.290043   50613 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-7djdr" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:50.378703   50613 pod_ready.go:102] pod "coredns-5dd5756b68-7djdr" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:50.370740   50505 pod_ready.go:102] pod "kube-apiserver-no-preload-320390" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:51.912802   50505 pod_ready.go:92] pod "kube-apiserver-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:51.912845   50505 pod_ready.go:81] duration metric: took 6.027451924s waiting for pod "kube-apiserver-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.912861   50505 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.920043   50505 pod_ready.go:92] pod "kube-controller-manager-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:51.920073   50505 pod_ready.go:81] duration metric: took 7.195906ms waiting for pod "kube-controller-manager-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.920085   50505 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c4mbm" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.927863   50505 pod_ready.go:92] pod "kube-proxy-c4mbm" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:51.927887   50505 pod_ready.go:81] duration metric: took 7.793258ms waiting for pod "kube-proxy-c4mbm" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.927900   50505 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.934444   50505 pod_ready.go:92] pod "kube-scheduler-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:51.934470   50505 pod_ready.go:81] duration metric: took 6.560509ms waiting for pod "kube-scheduler-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.934481   50505 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace to be "Ready" ...
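[Note: the pod_ready.go lines above poll each system pod's Ready condition until it reports "True" or the 4m0s budget runs out. A sketch of that check with client-go; the pod name and client-go polling helper here are illustrative, and exact wait helpers vary by client-go version:]

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod's Ready condition, the same check pod_ready.go
// logs above as `has status "Ready":"True"`.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient; keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(cs, "kube-system", "kube-proxy-c4mbm", 4*time.Minute))
}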
	I1108 00:13:50.131947   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:50.132491   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:50.132526   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:50.132397   51822 retry.go:31] will retry after 1.410573405s: waiting for machine to come up
	I1108 00:13:51.544674   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:51.545073   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:51.545099   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:51.545025   51822 retry.go:31] will retry after 1.773802671s: waiting for machine to come up
	I1108 00:13:53.320381   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:53.320863   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:53.320893   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:53.320805   51822 retry.go:31] will retry after 3.166868207s: waiting for machine to come up
	I1108 00:13:51.009734   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:51.009825   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:51.026052   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:51.509697   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:51.509786   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:51.527840   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:52.009557   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:52.009656   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:52.025049   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:52.509606   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:52.509707   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:52.526174   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:53.008803   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:53.008954   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:53.022472   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:53.508900   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:53.509005   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:53.525225   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:54.009884   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:54.009974   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:54.022171   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:54.509280   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:54.509376   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:54.522041   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:55.009670   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:55.009752   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:55.023035   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:55.509640   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:55.509717   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:55.526730   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:52.836317   50613 pod_ready.go:102] pod "coredns-5dd5756b68-7djdr" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:53.332094   50613 pod_ready.go:92] pod "coredns-5dd5756b68-7djdr" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:53.332121   50613 pod_ready.go:81] duration metric: took 6.042047013s waiting for pod "coredns-5dd5756b68-7djdr" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:53.332133   50613 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-8qjbd" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:53.337858   50613 pod_ready.go:92] pod "coredns-5dd5756b68-8qjbd" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:53.337882   50613 pod_ready.go:81] duration metric: took 5.740229ms waiting for pod "coredns-5dd5756b68-8qjbd" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:53.337894   50613 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:55.356131   50613 pod_ready.go:102] pod "etcd-embed-certs-253253" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:54.323357   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:56.328874   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:58.820773   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:56.490058   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:56.490553   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:56.490590   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:56.490511   51822 retry.go:31] will retry after 3.18441493s: waiting for machine to come up
	I1108 00:13:56.009549   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:56.009646   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:56.024559   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:56.508912   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:56.509015   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:56.521861   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:57.009408   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:57.009479   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:57.022156   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:57.509466   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:57.509554   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:57.522766   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:58.008909   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:58.009026   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:58.021521   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:58.509050   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:58.509134   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:58.521387   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:59.008889   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:59.008975   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:59.021781   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:59.509489   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:59.509575   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:59.521581   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:00.009117   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:14:00.009196   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:00.022210   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:00.483934   51228 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1108 00:14:00.483990   51228 kubeadm.go:1128] stopping kube-system containers ...
	I1108 00:14:00.484004   51228 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1108 00:14:00.484066   51228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:14:00.528120   51228 cri.go:89] found id: ""
	I1108 00:14:00.528178   51228 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1108 00:14:00.544876   51228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:14:00.553827   51228 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:14:00.553883   51228 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:14:00.562695   51228 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1108 00:14:00.562721   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:00.676044   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:57.856242   50613 pod_ready.go:102] pod "etcd-embed-certs-253253" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:58.855444   50613 pod_ready.go:92] pod "etcd-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:58.855471   50613 pod_ready.go:81] duration metric: took 5.517568786s waiting for pod "etcd-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.855479   50613 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.860431   50613 pod_ready.go:92] pod "kube-apiserver-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:58.860453   50613 pod_ready.go:81] duration metric: took 4.966273ms waiting for pod "kube-apiserver-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.860464   50613 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.865854   50613 pod_ready.go:92] pod "kube-controller-manager-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:58.865874   50613 pod_ready.go:81] duration metric: took 5.40177ms waiting for pod "kube-controller-manager-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.865914   50613 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rsgkf" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.870805   50613 pod_ready.go:92] pod "kube-proxy-rsgkf" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:58.870826   50613 pod_ready.go:81] duration metric: took 4.898411ms waiting for pod "kube-proxy-rsgkf" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.870835   50613 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.958009   50613 pod_ready.go:92] pod "kube-scheduler-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:58.958034   50613 pod_ready.go:81] duration metric: took 87.190501ms waiting for pod "kube-scheduler-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.958052   50613 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:01.265674   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:00.823696   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:03.322129   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:59.678086   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:59.678579   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:59.678598   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:59.678528   51822 retry.go:31] will retry after 4.30352873s: waiting for machine to come up
	I1108 00:14:03.983994   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:03.984437   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has current primary IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:03.984474   50022 main.go:141] libmachine: (old-k8s-version-590541) Found IP for machine: 192.168.50.49
	I1108 00:14:03.984489   50022 main.go:141] libmachine: (old-k8s-version-590541) Reserving static IP address...
	I1108 00:14:03.984947   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "old-k8s-version-590541", mac: "52:54:00:3c:aa:82", ip: "192.168.50.49"} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:03.984981   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | skip adding static IP to network mk-old-k8s-version-590541 - found existing host DHCP lease matching {name: "old-k8s-version-590541", mac: "52:54:00:3c:aa:82", ip: "192.168.50.49"}
	I1108 00:14:03.985000   50022 main.go:141] libmachine: (old-k8s-version-590541) Reserved static IP address: 192.168.50.49
	I1108 00:14:03.985020   50022 main.go:141] libmachine: (old-k8s-version-590541) Waiting for SSH to be available...
	I1108 00:14:03.985034   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Getting to WaitForSSH function...
	I1108 00:14:03.987671   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:03.988083   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:03.988116   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:03.988388   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Using SSH client type: external
	I1108 00:14:03.988424   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa (-rw-------)
	I1108 00:14:03.988461   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.49 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1108 00:14:03.988481   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | About to run SSH command:
	I1108 00:14:03.988496   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | exit 0
	I1108 00:14:04.080867   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | SSH cmd err, output: <nil>: 
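The "will retry after ...: waiting for machine to come up" lines above come from minikube's generic retry helper (retry.go), which re-runs a probe with a growing, jittered delay until the VM reports an IP and SSH answers. Below is a minimal Go sketch of that pattern; the function name, backoff constants, and growth factor are illustrative assumptions, not minikube's actual implementation.

	// Sketch of the retry-with-growing-delay loop reflected in the
	// "will retry after ..." log lines. Illustrative only.
	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// retryAfter calls fn until it succeeds or timeout elapses,
	// sleeping a jittered, growing interval between attempts.
	func retryAfter(timeout time.Duration, fn func() error) error {
		deadline := time.Now().Add(timeout)
		delay := time.Second
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out: %w", err)
			}
			// jitter so concurrent waiters don't retry in lockstep
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay = delay * 3 / 2 // grow the base delay each round
		}
	}
	
	func main() {
		attempts := 0
		_ = retryAfter(30*time.Second, func() error {
			attempts++
			if attempts < 4 {
				return errors.New("waiting for machine to come up")
			}
			return nil
		})
	}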
	I1108 00:14:04.081275   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetConfigRaw
	I1108 00:14:04.081955   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetIP
	I1108 00:14:04.085061   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.085512   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.085554   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.085942   50022 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/config.json ...
	I1108 00:14:04.086165   50022 machine.go:88] provisioning docker machine ...
	I1108 00:14:04.086188   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:14:04.086417   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetMachineName
	I1108 00:14:04.086612   50022 buildroot.go:166] provisioning hostname "old-k8s-version-590541"
	I1108 00:14:04.086634   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetMachineName
	I1108 00:14:04.086822   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:04.089431   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.089808   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.089838   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.090007   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:04.090201   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.090362   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.090535   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:04.090686   50022 main.go:141] libmachine: Using SSH client type: native
	I1108 00:14:04.090991   50022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1108 00:14:04.091002   50022 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-590541 && echo "old-k8s-version-590541" | sudo tee /etc/hostname
	I1108 00:14:04.228526   50022 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-590541
	
	I1108 00:14:04.228561   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:04.232020   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.232390   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.232454   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.232743   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:04.232930   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.233109   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.233264   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:04.233430   50022 main.go:141] libmachine: Using SSH client type: native
	I1108 00:14:04.233786   50022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1108 00:14:04.233812   50022 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-590541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-590541/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-590541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 00:14:04.370396   50022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 00:14:04.370424   50022 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1108 00:14:04.370469   50022 buildroot.go:174] setting up certificates
	I1108 00:14:04.370487   50022 provision.go:83] configureAuth start
	I1108 00:14:04.370505   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetMachineName
	I1108 00:14:04.370779   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetIP
	I1108 00:14:04.373683   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.374081   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.374111   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.374240   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:04.377048   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.377441   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.377469   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.377596   50022 provision.go:138] copyHostCerts
	I1108 00:14:04.377658   50022 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1108 00:14:04.377678   50022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1108 00:14:04.377748   50022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1108 00:14:04.377855   50022 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1108 00:14:04.377867   50022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1108 00:14:04.377893   50022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1108 00:14:04.377965   50022 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1108 00:14:04.377979   50022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1108 00:14:04.378005   50022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1108 00:14:04.378064   50022 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-590541 san=[192.168.50.49 192.168.50.49 localhost 127.0.0.1 minikube old-k8s-version-590541]
	I1108 00:14:04.534682   50022 provision.go:172] copyRemoteCerts
	I1108 00:14:04.534750   50022 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 00:14:04.534778   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:04.538002   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.538379   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.538408   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.538639   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:04.538789   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.538975   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:04.539146   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:14:04.632308   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 00:14:01.961492   51228 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.285410864s)
	I1108 00:14:01.961529   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:02.165604   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:02.235655   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
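The reconfigure path above re-runs individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full init. A hedged Go sketch of that sequence follows; the commands are copied verbatim from the log, while the local os/exec runner is an illustrative stand-in for minikube's ssh_runner.

	// Sketch of the kubeadm phase sequence logged above. The runner
	// is a local stand-in; minikube executes these over SSH.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		phases := []string{
			"certs all", "kubeconfig all", "kubelet-start",
			"control-plane all", "etcd local",
		}
		for _, p := range phases {
			cmd := exec.Command("/bin/bash", "-c",
				`sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase `+p+
					` --config /var/tmp/minikube/kubeadm.yaml`)
			if out, err := cmd.CombinedOutput(); err != nil {
				fmt.Printf("phase %q failed: %v\n%s\n", p, err, out)
				return
			}
		}
	}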
	I1108 00:14:02.352126   51228 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:14:02.352212   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:02.370538   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:02.884696   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:03.384139   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:03.884529   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:04.384134   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:04.884877   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:04.913244   51228 api_server.go:72] duration metric: took 2.56112461s to wait for apiserver process to appear ...
	I1108 00:14:04.913273   51228 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:14:04.913295   51228 api_server.go:253] Checking apiserver healthz at https://192.168.72.116:8444/healthz ...
	I1108 00:14:04.657542   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1108 00:14:04.682815   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 00:14:04.709405   50022 provision.go:86] duration metric: configureAuth took 338.902281ms
	I1108 00:14:04.709439   50022 buildroot.go:189] setting minikube options for container-runtime
	I1108 00:14:04.709651   50022 config.go:182] Loaded profile config "old-k8s-version-590541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1108 00:14:04.709741   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:04.713141   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.713520   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.713561   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.713718   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:04.713923   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.714108   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.714259   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:04.714497   50022 main.go:141] libmachine: Using SSH client type: native
	I1108 00:14:04.714885   50022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1108 00:14:04.714905   50022 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 00:14:05.055346   50022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 00:14:05.055427   50022 machine.go:91] provisioned docker machine in 969.247821ms
	I1108 00:14:05.055446   50022 start.go:300] post-start starting for "old-k8s-version-590541" (driver="kvm2")
	I1108 00:14:05.055459   50022 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 00:14:05.055493   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:14:05.055841   50022 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 00:14:05.055895   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:05.058959   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.059423   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:05.059457   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.059601   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:05.059775   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:05.059895   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:05.060042   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:14:05.151543   50022 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 00:14:05.155876   50022 info.go:137] Remote host: Buildroot 2021.02.12
	I1108 00:14:05.155902   50022 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1108 00:14:05.155969   50022 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1108 00:14:05.156056   50022 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1108 00:14:05.156229   50022 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 00:14:05.165742   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:14:05.190622   50022 start.go:303] post-start completed in 135.159333ms
	I1108 00:14:05.190648   50022 fix.go:56] fixHost completed within 22.904612851s
	I1108 00:14:05.190673   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:05.193716   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.194165   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:05.194195   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.194480   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:05.194725   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:05.194929   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:05.195106   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:05.195260   50022 main.go:141] libmachine: Using SSH client type: native
	I1108 00:14:05.195755   50022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1108 00:14:05.195778   50022 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1108 00:14:05.326443   50022 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699402445.269657345
	
	I1108 00:14:05.326467   50022 fix.go:206] guest clock: 1699402445.269657345
	I1108 00:14:05.326476   50022 fix.go:219] Guest: 2023-11-08 00:14:05.269657345 +0000 UTC Remote: 2023-11-08 00:14:05.190652611 +0000 UTC m=+370.589908297 (delta=79.004734ms)
	I1108 00:14:05.326524   50022 fix.go:190] guest clock delta is within tolerance: 79.004734ms
	I1108 00:14:05.326531   50022 start.go:83] releasing machines lock for "old-k8s-version-590541", held for 23.040527062s
	I1108 00:14:05.326558   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:14:05.326845   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetIP
	I1108 00:14:05.329775   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.330225   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:05.330254   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.330447   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:14:05.331102   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:14:05.331338   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:14:05.331424   50022 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 00:14:05.331467   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:05.331584   50022 ssh_runner.go:195] Run: cat /version.json
	I1108 00:14:05.331610   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:05.334586   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.334817   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.335125   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:05.335182   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.335225   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:05.335307   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:05.335339   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.335418   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:05.335536   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:05.335603   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:05.335774   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:05.335783   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:14:05.335906   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:05.336063   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:14:05.423679   50022 ssh_runner.go:195] Run: systemctl --version
	I1108 00:14:05.446956   50022 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 00:14:05.598713   50022 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 00:14:05.605558   50022 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 00:14:05.605641   50022 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:14:05.620183   50022 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 00:14:05.620211   50022 start.go:472] detecting cgroup driver to use...
	I1108 00:14:05.620277   50022 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 00:14:05.635981   50022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 00:14:05.649637   50022 docker.go:203] disabling cri-docker service (if available) ...
	I1108 00:14:05.649699   50022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 00:14:05.664232   50022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 00:14:05.678205   50022 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 00:14:05.791991   50022 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 00:14:05.925002   50022 docker.go:219] disabling docker service ...
	I1108 00:14:05.925135   50022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 00:14:05.939853   50022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 00:14:05.955518   50022 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 00:14:06.074872   50022 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 00:14:06.189371   50022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 00:14:06.202247   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 00:14:06.219012   50022 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1108 00:14:06.219082   50022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:14:06.229837   50022 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 00:14:06.229911   50022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:14:06.239769   50022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:14:06.248633   50022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:14:06.257717   50022 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 00:14:06.268893   50022 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 00:14:06.277427   50022 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1108 00:14:06.277495   50022 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1108 00:14:06.290771   50022 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 00:14:06.299918   50022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 00:14:06.421038   50022 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 00:14:06.587544   50022 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 00:14:06.587624   50022 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 00:14:06.592726   50022 start.go:540] Will wait 60s for crictl version
	I1108 00:14:06.592781   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:06.596695   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 00:14:06.637642   50022 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1108 00:14:06.637733   50022 ssh_runner.go:195] Run: crio --version
	I1108 00:14:06.690026   50022 ssh_runner.go:195] Run: crio --version
	I1108 00:14:06.740455   50022 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
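The two sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and force the cgroupfs cgroup manager before crio is restarted. Below is a rough Go equivalent of those two edits, for illustration only; the helper function is assumed, not minikube code, though the file path and replacement values come straight from the log.

	// Sketch of the in-place config edits performed by the sed
	// commands above. Illustrative stand-in, not minikube's code.
	package main
	
	import (
		"fmt"
		"os"
		"regexp"
	)
	
	func patchCrioConf(path string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		conf := string(data)
		// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|'
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.1"`)
		// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		return os.WriteFile(path, []byte(conf), 0o644)
	}
	
	func main() {
		if err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
			fmt.Println("patch failed:", err)
		}
	}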
	I1108 00:14:03.266720   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:05.764837   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:05.322160   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:07.329491   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:06.741799   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetIP
	I1108 00:14:06.744301   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:06.744599   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:06.744630   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:06.744861   50022 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1108 00:14:06.749385   50022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:14:06.762645   50022 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1108 00:14:06.762732   50022 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:14:06.804386   50022 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1108 00:14:06.804458   50022 ssh_runner.go:195] Run: which lz4
	I1108 00:14:06.808948   50022 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1108 00:14:06.813319   50022 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1108 00:14:06.813355   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1108 00:14:08.476578   50022 crio.go:444] Took 1.667668 seconds to copy over tarball
	I1108 00:14:08.476646   50022 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1108 00:14:09.078810   51228 api_server.go:279] https://192.168.72.116:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:14:09.078843   51228 api_server.go:103] status: https://192.168.72.116:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:14:09.078859   51228 api_server.go:253] Checking apiserver healthz at https://192.168.72.116:8444/healthz ...
	I1108 00:14:09.140049   51228 api_server.go:279] https://192.168.72.116:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:14:09.140083   51228 api_server.go:103] status: https://192.168.72.116:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:14:09.641000   51228 api_server.go:253] Checking apiserver healthz at https://192.168.72.116:8444/healthz ...
	I1108 00:14:09.647216   51228 api_server.go:279] https://192.168.72.116:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:14:09.647247   51228 api_server.go:103] status: https://192.168.72.116:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:14:10.140446   51228 api_server.go:253] Checking apiserver healthz at https://192.168.72.116:8444/healthz ...
	I1108 00:14:10.148995   51228 api_server.go:279] https://192.168.72.116:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:14:10.149028   51228 api_server.go:103] status: https://192.168.72.116:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:14:10.640719   51228 api_server.go:253] Checking apiserver healthz at https://192.168.72.116:8444/healthz ...
	I1108 00:14:10.649076   51228 api_server.go:279] https://192.168.72.116:8444/healthz returned 200:
	ok
	I1108 00:14:10.660508   51228 api_server.go:141] control plane version: v1.28.3
	I1108 00:14:10.660545   51228 api_server.go:131] duration metric: took 5.747263547s to wait for apiserver health ...
	I1108 00:14:10.660556   51228 cni.go:84] Creating CNI manager for ""
	I1108 00:14:10.660566   51228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:14:10.662644   51228 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:14:10.664069   51228 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:14:10.682131   51228 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1108 00:14:10.709582   51228 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:14:10.725779   51228 system_pods.go:59] 8 kube-system pods found
	I1108 00:14:10.725840   51228 system_pods.go:61] "coredns-5dd5756b68-rz9t4" [d7b24f41-ed9e-4b07-991b-8587f49d7902] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 00:14:10.725854   51228 system_pods.go:61] "etcd-default-k8s-diff-port-039263" [f58b5fbb-a565-4d47-8b3d-ea62169dc0fc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 00:14:10.725868   51228 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-039263" [d0c3391c-679f-49ad-a6ff-ef62d74a62ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 00:14:10.725882   51228 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-039263" [33f54c9b-cc67-4662-8db9-c735fde4d9a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 00:14:10.725903   51228 system_pods.go:61] "kube-proxy-z7b8g" [079a28b1-dbad-4e62-a9ea-b667206433cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 00:14:10.725914   51228 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-039263" [629f940b-6d2a-4c3c-8a11-2805dc2c04d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 00:14:10.725927   51228 system_pods.go:61] "metrics-server-57f55c9bc5-nlhpn" [f5d69cb1-4266-45fc-9bab-57053f915aa0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:14:10.725941   51228 system_pods.go:61] "storage-provisioner" [fb6541da-3ed3-4abb-b534-643bb5faf7d3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 00:14:10.725953   51228 system_pods.go:74] duration metric: took 16.346941ms to wait for pod list to return data ...
	I1108 00:14:10.725965   51228 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:14:10.730466   51228 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:14:10.730555   51228 node_conditions.go:123] node cpu capacity is 2
	I1108 00:14:10.730574   51228 node_conditions.go:105] duration metric: took 4.602969ms to run NodePressure ...
	I1108 00:14:10.730595   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
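
The 500 responses above are the normal start-up sequence: minikube keeps polling /healthz until the poststarthooks clear and the endpoint returns 200. A minimal sketch of such a polling loop, assuming the URL and intervals seen in this log (an illustration, not minikube's actual api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed cert here; the sketch skips
		// verification, but real callers should trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.72.116:8444/healthz" // address taken from the log above
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz ok")
				return
			}
			// A 500 listing "[-]poststarthook/... failed" means the apiserver
			// is up but still bootstrapping; keep polling.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
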
	I1108 00:14:07.772448   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:10.267241   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:09.824633   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:11.829090   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:14.015104   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:11.781938   50022 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.305246635s)
	I1108 00:14:11.781979   50022 crio.go:451] Took 3.305377 seconds to extract the tarball
	I1108 00:14:11.781999   50022 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1108 00:14:11.837911   50022 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:14:11.907599   50022 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1108 00:14:11.907634   50022 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1108 00:14:11.907702   50022 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:14:11.907965   50022 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1108 00:14:11.907983   50022 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1108 00:14:11.908131   50022 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1108 00:14:11.907966   50022 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1108 00:14:11.908257   50022 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1108 00:14:11.908131   50022 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1108 00:14:11.908365   50022 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1108 00:14:11.909163   50022 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1108 00:14:11.909239   50022 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1108 00:14:11.909251   50022 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1108 00:14:11.909332   50022 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1108 00:14:11.909171   50022 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1108 00:14:11.909397   50022 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1108 00:14:11.909435   50022 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:14:11.909625   50022 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1108 00:14:12.040043   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1108 00:14:12.042004   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1108 00:14:12.047478   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1108 00:14:12.051016   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1108 00:14:12.095045   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1108 00:14:12.126645   50022 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1108 00:14:12.126718   50022 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1108 00:14:12.126788   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.133035   50022 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1108 00:14:12.133078   50022 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1108 00:14:12.133120   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.164621   50022 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1108 00:14:12.164686   50022 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1108 00:14:12.164754   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.182223   50022 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1108 00:14:12.182267   50022 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1108 00:14:12.182318   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.201151   50022 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1108 00:14:12.201196   50022 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1108 00:14:12.201244   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.201255   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1108 00:14:12.201306   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1108 00:14:12.201305   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1108 00:14:12.201341   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1108 00:14:12.203375   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1108 00:14:12.208529   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1108 00:14:12.341873   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1108 00:14:12.341901   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1108 00:14:12.341954   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1108 00:14:12.341960   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1108 00:14:12.356561   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1108 00:14:12.356663   50022 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1108 00:14:12.361927   50022 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1108 00:14:12.361962   50022 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1108 00:14:12.362023   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.382770   50022 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1108 00:14:12.382819   50022 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1108 00:14:12.382864   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.406169   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1108 00:14:12.406213   50022 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1108 00:14:12.406228   50022 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1108 00:14:12.406273   50022 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1108 00:14:12.406313   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1108 00:14:12.406274   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1108 00:14:12.863910   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:14:14.488498   50022 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0: (2.082152502s)
	I1108 00:14:14.488536   50022 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.082234083s)
	I1108 00:14:14.488548   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1108 00:14:14.488571   50022 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1108 00:14:14.488623   50022 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0: (2.082249259s)
	I1108 00:14:14.488666   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1108 00:14:14.488711   50022 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.624766966s)
	I1108 00:14:14.488762   50022 cache_images.go:92] LoadImages completed in 2.581114029s
	W1108 00:14:14.488842   50022 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
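
The "needs transfer" lines above come from an existence check that precedes every load: each image is looked up in the runtime by ID, and only missing images are pulled from the on-disk cache. A sketch of that check, assuming the same `podman image inspect` probe the log runs (a hypothetical helper, not minikube's cache_images.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageID returns the runtime's ID for ref, or "" if the image is absent.
func imageID(ref string) string {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", ref).Output()
	if err != nil {
		return "" // inspect exits nonzero when the image is not present
	}
	return strings.TrimSpace(string(out))
}

func main() {
	ref := "registry.k8s.io/pause:3.1" // example ref from the log
	if id := imageID(ref); id == "" {
		fmt.Printf("%s needs transfer from the local cache\n", ref)
	} else {
		fmt.Printf("%s already present as %s\n", ref, id)
	}
}
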
	I1108 00:14:14.488915   50022 ssh_runner.go:195] Run: crio config
	I1108 00:14:14.557127   50022 cni.go:84] Creating CNI manager for ""
	I1108 00:14:14.557155   50022 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:14:14.557176   50022 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1108 00:14:14.557204   50022 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.49 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-590541 NodeName:old-k8s-version-590541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.49"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.49 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1108 00:14:14.557391   50022 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.49
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-590541"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.49
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.49"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-590541
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.49:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 00:14:14.557508   50022 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-590541 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.49
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-590541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1108 00:14:14.557579   50022 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1108 00:14:14.568423   50022 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 00:14:14.568501   50022 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 00:14:14.578581   50022 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1108 00:14:14.596389   50022 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 00:14:14.613956   50022 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1108 00:14:14.631988   50022 ssh_runner.go:195] Run: grep 192.168.50.49	control-plane.minikube.internal$ /etc/hosts
	I1108 00:14:14.636236   50022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.49	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:14:14.648849   50022 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541 for IP: 192.168.50.49
	I1108 00:14:14.648888   50022 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:14:14.649071   50022 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1108 00:14:14.649126   50022 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1108 00:14:14.649231   50022 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/client.key
	I1108 00:14:14.649312   50022 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/apiserver.key.5b7c76e3
	I1108 00:14:14.649375   50022 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/proxy-client.key
	I1108 00:14:14.649542   50022 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem (1338 bytes)
	W1108 00:14:14.649587   50022 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848_empty.pem, impossibly tiny 0 bytes
	I1108 00:14:14.649597   50022 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 00:14:14.649636   50022 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1108 00:14:14.649677   50022 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1108 00:14:14.649714   50022 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1108 00:14:14.649771   50022 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:14:11.058474   51228 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1108 00:14:11.064805   51228 kubeadm.go:787] kubelet initialised
	I1108 00:14:11.064852   51228 kubeadm.go:788] duration metric: took 6.346592ms waiting for restarted kubelet to initialise ...
	I1108 00:14:11.064863   51228 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:14:11.073499   51228 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-rz9t4" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:11.089759   51228 pod_ready.go:97] node "default-k8s-diff-port-039263" hosting pod "coredns-5dd5756b68-rz9t4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.089791   51228 pod_ready.go:81] duration metric: took 16.257238ms waiting for pod "coredns-5dd5756b68-rz9t4" in "kube-system" namespace to be "Ready" ...
	E1108 00:14:11.089803   51228 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-039263" hosting pod "coredns-5dd5756b68-rz9t4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.089811   51228 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:11.100580   51228 pod_ready.go:97] node "default-k8s-diff-port-039263" hosting pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.100605   51228 pod_ready.go:81] duration metric: took 10.783802ms waiting for pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	E1108 00:14:11.100615   51228 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-039263" hosting pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.100621   51228 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:11.113797   51228 pod_ready.go:97] node "default-k8s-diff-port-039263" hosting pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.113826   51228 pod_ready.go:81] duration metric: took 13.195367ms waiting for pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	E1108 00:14:11.113838   51228 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-039263" hosting pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.113847   51228 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:11.124704   51228 pod_ready.go:97] node "default-k8s-diff-port-039263" hosting pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.124736   51228 pod_ready.go:81] duration metric: took 10.87946ms waiting for pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	E1108 00:14:11.124750   51228 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-039263" hosting pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.124760   51228 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-z7b8g" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:11.915650   51228 pod_ready.go:92] pod "kube-proxy-z7b8g" in "kube-system" namespace has status "Ready":"True"
	I1108 00:14:11.915674   51228 pod_ready.go:81] duration metric: took 790.904941ms waiting for pod "kube-proxy-z7b8g" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:11.915686   51228 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:14.011244   51228 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:12.537889   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:14.767882   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:16.322840   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:18.323955   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:14.650662   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1108 00:14:14.682536   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1108 00:14:14.708618   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 00:14:14.737947   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 00:14:14.768365   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 00:14:14.795469   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 00:14:14.824086   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 00:14:14.851375   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 00:14:14.878638   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 00:14:14.906647   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem --> /usr/share/ca-certificates/16848.pem (1338 bytes)
	I1108 00:14:14.933316   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /usr/share/ca-certificates/168482.pem (1708 bytes)
	I1108 00:14:14.961937   50022 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 00:14:14.980167   50022 ssh_runner.go:195] Run: openssl version
	I1108 00:14:14.986053   50022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16848.pem && ln -fs /usr/share/ca-certificates/16848.pem /etc/ssl/certs/16848.pem"
	I1108 00:14:14.996201   50022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16848.pem
	I1108 00:14:15.001410   50022 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1108 00:14:15.001490   50022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16848.pem
	I1108 00:14:15.008681   50022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16848.pem /etc/ssl/certs/51391683.0"
	I1108 00:14:15.022034   50022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168482.pem && ln -fs /usr/share/ca-certificates/168482.pem /etc/ssl/certs/168482.pem"
	I1108 00:14:15.031992   50022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168482.pem
	I1108 00:14:15.037854   50022 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1108 00:14:15.037910   50022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168482.pem
	I1108 00:14:15.045107   50022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168482.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 00:14:15.057464   50022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 00:14:15.070137   50022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:14:15.075848   50022 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:14:15.075917   50022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:14:15.083414   50022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 00:14:15.094499   50022 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1108 00:14:15.099437   50022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 00:14:15.105940   50022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 00:14:15.112527   50022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 00:14:15.118429   50022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 00:14:15.124769   50022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 00:14:15.130975   50022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
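
Each `openssl x509 -checkend 86400` run above asks whether a certificate expires within the next 24 hours. The same check in Go, using only the standard library (the path is a placeholder taken from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of -checkend 86400: fail if NotAfter falls within 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 86400s; it should be regenerated")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}
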
	I1108 00:14:15.136772   50022 kubeadm.go:404] StartCluster: {Name:old-k8s-version-590541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-590541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.49 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:14:15.136903   50022 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 00:14:15.136952   50022 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:14:15.184018   50022 cri.go:89] found id: ""
	I1108 00:14:15.184095   50022 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 00:14:15.196900   50022 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1108 00:14:15.196924   50022 kubeadm.go:636] restartCluster start
	I1108 00:14:15.196994   50022 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 00:14:15.208810   50022 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:15.210399   50022 kubeconfig.go:92] found "old-k8s-version-590541" server: "https://192.168.50.49:8443"
	I1108 00:14:15.214114   50022 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 00:14:15.223586   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:15.223644   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:15.234506   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:15.234525   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:15.234565   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:15.244971   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:15.745626   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:15.745698   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:15.757830   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:16.246012   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:16.246090   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:16.258583   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:16.745965   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:16.746045   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:16.758317   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:17.245985   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:17.246087   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:17.257615   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:17.745646   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:17.745715   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:17.757591   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:18.245666   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:18.245773   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:18.258225   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:18.745765   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:18.745842   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:18.756699   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:19.245946   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:19.246016   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:19.258255   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:16.222461   51228 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:18.722269   51228 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"True"
	I1108 00:14:18.722291   51228 pod_ready.go:81] duration metric: took 6.806598217s waiting for pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:18.722300   51228 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:20.739081   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:17.264976   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:19.265242   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:21.265825   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:20.822592   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:23.321115   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:19.745997   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:19.746135   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:19.757885   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:20.245884   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:20.245988   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:20.258408   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:20.745963   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:20.746035   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:20.757892   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:21.246052   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:21.246133   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:21.258401   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:21.745947   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:21.746040   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:21.759160   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:22.246004   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:22.246075   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:22.258859   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:22.745787   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:22.745889   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:22.758099   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:23.245961   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:23.246068   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:23.258810   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:23.745167   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:23.745248   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:23.757093   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:24.245690   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:24.245751   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:24.258264   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
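
The long run of `sudo pgrep -xnf kube-apiserver.*minikube.*` failures above is a wait loop: pgrep exits 1 while no apiserver process matches, and the check is retried roughly every 500ms. A sketch of that loop, assuming the interval and pattern from this log (run locally for illustration, not through minikube's ssh_runner):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// pgrep exits 1 when nothing matches, which Run reports as an error.
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		if err == nil {
			fmt.Println("apiserver process found")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out: apiserver process never appeared")
}
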
	I1108 00:14:22.739380   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:24.739502   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:23.766235   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:26.264779   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:25.322215   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:27.322896   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:24.745944   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:24.746024   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:24.759229   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:25.224130   50022 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1108 00:14:25.224188   50022 kubeadm.go:1128] stopping kube-system containers ...
	I1108 00:14:25.224207   50022 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1108 00:14:25.224267   50022 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:14:25.271348   50022 cri.go:89] found id: ""
	I1108 00:14:25.271418   50022 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1108 00:14:25.287540   50022 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:14:25.296398   50022 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:14:25.296452   50022 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:14:25.305111   50022 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1108 00:14:25.305137   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:25.434385   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:26.361847   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:26.561621   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:26.667973   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
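
The five `kubeadm init phase` invocations above (certs, kubeconfig, kubelet-start, control-plane, etcd) are the cluster-restart recipe that follows restartCluster. A sketch that replays them in order, with the binary path and config file taken from the log (error handling abbreviated; assumed flow, not minikube's exact code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s\n", p, err, out)
			return
		}
	}
	fmt.Println("all init phases completed")
}
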
	I1108 00:14:26.798155   50022 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:14:26.798240   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:26.822210   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:27.335493   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:27.836175   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:28.336398   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:28.836400   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:28.862790   50022 api_server.go:72] duration metric: took 2.064638513s to wait for apiserver process to appear ...
	I1108 00:14:28.862814   50022 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:14:28.862827   50022 api_server.go:253] Checking apiserver healthz at https://192.168.50.49:8443/healthz ...
	I1108 00:14:26.740013   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:28.740958   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:28.266931   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:30.765036   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:29.827237   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:32.323375   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:33.863452   50022 api_server.go:269] stopped: https://192.168.50.49:8443/healthz: Get "https://192.168.50.49:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1108 00:14:33.863495   50022 api_server.go:253] Checking apiserver healthz at https://192.168.50.49:8443/healthz ...
	I1108 00:14:34.513495   50022 api_server.go:279] https://192.168.50.49:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:14:34.513530   50022 api_server.go:103] status: https://192.168.50.49:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:14:31.240440   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:33.739764   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:35.014492   50022 api_server.go:253] Checking apiserver healthz at https://192.168.50.49:8443/healthz ...
	I1108 00:14:35.020991   50022 api_server.go:279] https://192.168.50.49:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1108 00:14:35.021019   50022 api_server.go:103] status: https://192.168.50.49:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1108 00:14:35.514559   50022 api_server.go:253] Checking apiserver healthz at https://192.168.50.49:8443/healthz ...
	I1108 00:14:35.521451   50022 api_server.go:279] https://192.168.50.49:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1108 00:14:35.521475   50022 api_server.go:103] status: https://192.168.50.49:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1108 00:14:36.014620   50022 api_server.go:253] Checking apiserver healthz at https://192.168.50.49:8443/healthz ...
	I1108 00:14:36.021243   50022 api_server.go:279] https://192.168.50.49:8443/healthz returned 200:
	ok
	I1108 00:14:36.029191   50022 api_server.go:141] control plane version: v1.16.0
	I1108 00:14:36.029214   50022 api_server.go:131] duration metric: took 7.166394703s to wait for apiserver health ...
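The healthz exchange above is the normal recovery sequence for an apiserver coming back up: anonymous probes are refused with 403 until the RBAC bootstrap roles exist, /healthz then answers 500 while individual poststarthooks ([-]poststarthook/rbac/bootstrap-roles and friends) are still failing, and finally 200 once every check passes, at which point the 7.17s wait above completes. A minimal sketch of such a poll loop, assuming a self-signed apiserver certificate (an illustration, not minikube's actual api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls an apiserver /healthz endpoint until it returns 200
// or the deadline passes. 403 (RBAC not yet bootstrapped) and 500
// (poststarthooks still failing) both mean "not ready, retry".
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// minikube's apiserver cert is self-signed, so verification is skipped here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitHealthz("https://192.168.50.49:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}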
	I1108 00:14:36.029225   50022 cni.go:84] Creating CNI manager for ""
	I1108 00:14:36.029232   50022 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:14:36.030800   50022 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:14:32.765436   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:35.264657   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:34.825199   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:37.322438   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:36.032078   50022 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:14:36.042827   50022 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
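The 457-byte conflist itself is not reproduced in the log, only its size. For orientation, a bridge CNI config of the general shape minikube writes for the "kvm2 + crio" combination looks like the constant below; the values are illustrative, not the exact bytes scp'd above, and the Go around it mirrors the mkdir + copy steps:

package main

import "os"

// Representative bridge + portmap conflist; the real /etc/cni/net.d/1-k8s.conflist
// written above may differ in its details.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	// Equivalent of `sudo mkdir -p /etc/cni/net.d` followed by the scp in the log.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}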
	I1108 00:14:36.062239   50022 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:14:36.070373   50022 system_pods.go:59] 7 kube-system pods found
	I1108 00:14:36.070404   50022 system_pods.go:61] "coredns-5644d7b6d9-cmx8s" [510a3ae2-abff-40f9-8605-7fd6cc5316de] Running
	I1108 00:14:36.070414   50022 system_pods.go:61] "etcd-old-k8s-version-590541" [4597d43f-d424-4591-8a5c-6e4a7d60bb2b] Running
	I1108 00:14:36.070420   50022 system_pods.go:61] "kube-apiserver-old-k8s-version-590541" [353c1157-7cac-4809-91ea-30745ecbc10c] Running
	I1108 00:14:36.070427   50022 system_pods.go:61] "kube-controller-manager-old-k8s-version-590541" [30679f8f-aa28-4349-ada1-97af45c0c065] Running
	I1108 00:14:36.070432   50022 system_pods.go:61] "kube-proxy-r8p96" [21ac95e4-595f-4520-8174-ef5e1334c1be] Running
	I1108 00:14:36.070437   50022 system_pods.go:61] "kube-scheduler-old-k8s-version-590541" [f406d277-d786-417a-9428-8433143db81c] Running
	I1108 00:14:36.070443   50022 system_pods.go:61] "storage-provisioner" [26f85033-bd24-4332-ba8d-1aed49559417] Running
	I1108 00:14:36.070452   50022 system_pods.go:74] duration metric: took 8.188793ms to wait for pod list to return data ...
	I1108 00:14:36.070461   50022 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:14:36.075209   50022 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:14:36.075242   50022 node_conditions.go:123] node cpu capacity is 2
	I1108 00:14:36.075259   50022 node_conditions.go:105] duration metric: took 4.788324ms to run NodePressure ...
	I1108 00:14:36.075286   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:36.310748   50022 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1108 00:14:36.319886   50022 retry.go:31] will retry after 259.644928ms: kubelet not initialised
	I1108 00:14:36.584728   50022 retry.go:31] will retry after 259.541836ms: kubelet not initialised
	I1108 00:14:36.851013   50022 retry.go:31] will retry after 319.229418ms: kubelet not initialised
	I1108 00:14:37.192544   50022 retry.go:31] will retry after 949.166954ms: kubelet not initialised
	I1108 00:14:38.149087   50022 retry.go:31] will retry after 1.159461481s: kubelet not initialised
	I1108 00:14:39.313777   50022 retry.go:31] will retry after 1.441288405s: kubelet not initialised
	I1108 00:14:36.240206   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:38.240974   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:40.739451   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:37.266643   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:39.267727   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:41.765636   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:39.323180   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:41.323278   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:43.821724   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:40.762380   50022 retry.go:31] will retry after 2.811416386s: kubelet not initialised
	I1108 00:14:43.579217   50022 retry.go:31] will retry after 4.427599597s: kubelet not initialised
	I1108 00:14:42.739823   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:45.238841   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:44.266015   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:46.766564   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:45.822389   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:47.822637   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:48.011401   50022 retry.go:31] will retry after 9.583320686s: kubelet not initialised
	I1108 00:14:47.239708   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:49.739520   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:49.264876   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:51.265467   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:50.321858   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:52.823189   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:51.740005   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:54.239137   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:53.267904   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:55.767709   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:55.321381   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:57.821679   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:57.600096   50022 retry.go:31] will retry after 8.628668417s: kubelet not initialised
	I1108 00:14:56.242527   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:58.740775   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:00.742908   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:58.263898   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:00.264487   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:59.822276   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:02.322959   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:02.744271   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:05.239364   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:02.764787   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:04.767529   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:04.821706   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:06.822611   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:08.822950   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:06.235557   50022 retry.go:31] will retry after 18.967803661s: kubelet not initialised
	I1108 00:15:07.239957   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:09.243640   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:07.268913   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:09.765546   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:10.823397   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:13.320774   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:11.741381   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:14.239143   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:12.265009   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:14.265329   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:16.265470   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:15.322148   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:17.821371   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:16.740364   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:18.742058   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:18.267349   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:20.763380   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:19.821495   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:21.822583   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:21.239196   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:23.239716   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:25.740472   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:22.764934   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:25.264695   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:24.322074   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:26.324255   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:28.823261   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:25.208456   50022 kubeadm.go:787] kubelet initialised
	I1108 00:15:25.208482   50022 kubeadm.go:788] duration metric: took 48.897709945s waiting for restarted kubelet to initialise ...
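The retry.go lines above show minikube's jittered backoff while the restarted kubelet comes up: the interval grows from 259ms to roughly 19s before the "kubelet initialised" line lands 48.9s in. A minimal sketch of that pattern, assuming a simple doubling multiplier with a cap and random jitter (an illustration, not minikube's retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or the budget is spent,
// sleeping an exponentially growing, jittered interval between attempts,
// much like the "will retry after ..." lines in the log.
func retryWithBackoff(fn func() error, budget time.Duration) error {
	start := time.Now()
	interval := 250 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > budget {
			return fmt.Errorf("gave up after %s: %w", time.Since(start), err)
		}
		// Jitter keeps concurrent retriers from synchronising their attempts.
		sleep := interval + time.Duration(rand.Int63n(int64(interval/2)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		if interval < 16*time.Second {
			interval *= 2
		}
	}
}

func main() {
	attempts := 0
	_ = retryWithBackoff(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("kubelet not initialised")
		}
		return nil
	}, time.Minute)
}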
	I1108 00:15:25.208492   50022 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:15:25.213730   50022 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-cmx8s" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.220419   50022 pod_ready.go:92] pod "coredns-5644d7b6d9-cmx8s" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:25.220444   50022 pod_ready.go:81] duration metric: took 6.688227ms waiting for pod "coredns-5644d7b6d9-cmx8s" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.220455   50022 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-n42t2" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.225713   50022 pod_ready.go:92] pod "coredns-5644d7b6d9-n42t2" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:25.225734   50022 pod_ready.go:81] duration metric: took 5.271879ms waiting for pod "coredns-5644d7b6d9-n42t2" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.225742   50022 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.231081   50022 pod_ready.go:92] pod "etcd-old-k8s-version-590541" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:25.231102   50022 pod_ready.go:81] duration metric: took 5.353373ms waiting for pod "etcd-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.231113   50022 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.235653   50022 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-590541" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:25.235676   50022 pod_ready.go:81] duration metric: took 4.554135ms waiting for pod "kube-apiserver-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.235687   50022 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.607677   50022 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-590541" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:25.607702   50022 pod_ready.go:81] duration metric: took 372.006515ms waiting for pod "kube-controller-manager-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.607715   50022 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r8p96" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:26.007866   50022 pod_ready.go:92] pod "kube-proxy-r8p96" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:26.007901   50022 pod_ready.go:81] duration metric: took 400.175462ms waiting for pod "kube-proxy-r8p96" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:26.007915   50022 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:26.408998   50022 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-590541" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:26.409023   50022 pod_ready.go:81] duration metric: took 401.100386ms waiting for pod "kube-scheduler-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
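Each pod_ready.go check above reduces to reading the PodReady condition from the pod's status, the same signal behind kubectl's READY column; the system pods above flip to "Ready":"True" within milliseconds, while the metrics-server pods polled below never do. A condensed client-go sketch of such a wait (the kubeconfig path is a placeholder, and minikube's real implementation layers labels, deadlines, and the "extra waiting" bookkeeping on top):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the named pod's PodReady condition is True.
func waitPodReady(client kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient apiserver error: keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	// "/path/to/kubeconfig" is a placeholder, not a path taken from the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(client, "kube-system", "kube-scheduler-old-k8s-version-590541", 4*time.Minute))
}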
	I1108 00:15:26.409037   50022 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:28.714602   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:27.743907   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:30.242025   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:27.764799   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:29.765943   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:31.322316   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:33.821723   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:30.715349   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:33.213961   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:32.739648   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:35.238544   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:32.270073   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:34.764272   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:36.768065   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:36.322383   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:38.821688   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:35.215842   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:37.714618   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:37.239003   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:39.239229   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:39.266142   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:41.765225   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:40.822847   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:42.823419   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:40.214573   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:42.214623   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:41.239832   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:43.740100   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:43.765773   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:45.767613   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:45.323162   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:47.323716   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:44.714312   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:46.714541   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:49.214939   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:46.238097   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:48.240079   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:50.740404   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:48.264657   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:50.266155   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:49.821171   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:51.821247   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:53.821754   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:51.715388   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:54.214072   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:53.239902   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:55.240606   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:52.764709   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:54.765802   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:55.821843   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:57.822037   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:56.214628   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:58.215873   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:57.739805   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:59.742442   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:57.264640   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:59.265598   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:01.269674   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:59.823743   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:02.321221   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:00.716761   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:02.717300   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:02.240157   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:04.740325   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:03.765956   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:06.266810   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:04.322200   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:06.325043   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:08.822004   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:05.214678   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:07.214757   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:06.741067   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:09.238455   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:08.764592   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:10.764740   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:11.321882   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:13.323997   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:09.715347   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:12.215814   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:11.238960   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:13.239188   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:15.239933   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:13.268590   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:15.767860   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:15.822286   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:18.323447   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:14.715001   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:17.214864   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:19.220945   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:17.743653   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:20.239877   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:18.267403   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:20.765825   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:20.828982   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:23.322508   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:21.715604   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:24.215532   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:22.240232   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:24.240410   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:22.767921   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:25.266374   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:25.821672   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:28.323033   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:26.715605   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:29.215673   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:26.240493   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:28.739795   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:30.739838   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:27.268851   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:29.765296   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:30.822234   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:32.822653   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:31.714216   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:33.714677   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:33.238984   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:35.239828   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:32.264549   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:34.765297   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:34.823243   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:37.321349   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:35.715073   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:37.715879   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:37.240347   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:39.739526   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:37.265284   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:39.764898   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:39.322588   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:41.822017   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:40.214804   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:42.714783   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:42.238649   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:44.238830   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:42.265404   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:44.266352   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:46.763687   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:44.321389   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:46.322294   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:48.822670   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:45.215415   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:47.715215   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:46.239884   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:48.740698   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:50.740725   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:48.765820   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:51.265744   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:51.321664   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:53.321945   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:49.715720   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:52.215540   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:53.239897   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:55.241013   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:53.764035   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:55.767704   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:55.324156   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:57.821380   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:54.716014   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:57.213472   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:59.216084   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:57.740250   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:59.740808   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:58.264915   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:00.764064   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:59.823358   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:01.824897   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:03.827668   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:01.714273   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:03.714538   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:02.238718   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:04.239300   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:02.766695   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:05.268491   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:06.321926   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:08.822906   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:06.215268   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:08.215344   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:06.740893   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:09.240404   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:07.764370   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:09.764952   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:11.765807   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:10.823030   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:13.320640   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:10.715494   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:13.214139   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:11.741308   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:13.741849   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:14.265117   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:16.265550   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:15.322703   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:17.822360   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:15.214808   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:17.214944   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:19.215663   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:16.239627   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:18.241991   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:20.742074   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:18.764043   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:20.764244   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:20.322245   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:22.821679   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:21.715000   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:23.715813   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:23.240800   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:25.741203   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:23.264974   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:25.267122   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:24.823144   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:27.322674   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:26.215099   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:28.215710   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:28.242151   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:30.741098   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:27.765060   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:30.266360   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:29.821467   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:31.822093   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:30.714747   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:32.716931   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:33.241199   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:35.744300   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:32.765221   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:34.766163   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:34.320569   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:36.321680   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:38.321803   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:35.215458   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:37.715660   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:38.241103   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:40.241689   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:37.264893   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:39.264980   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:41.764589   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:40.323069   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:42.822323   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:40.214357   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:42.215838   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:42.738943   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:44.738995   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:44.265516   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:46.764435   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:44.827347   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:47.321911   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:44.715762   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:46.716679   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:49.214899   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:46.739838   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:48.740204   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:48.766668   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:51.266657   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:49.822604   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:51.823333   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:51.935354   50505 pod_ready.go:81] duration metric: took 4m0.000854035s waiting for pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace to be "Ready" ...
	E1108 00:17:51.935397   50505 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1108 00:17:51.935438   50505 pod_ready.go:38] duration metric: took 4m11.589382956s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:17:51.935470   50505 kubeadm.go:640] restartCluster took 4m31.32204509s
	W1108 00:17:51.935533   50505 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1108 00:17:51.935560   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1108 00:17:51.715171   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:53.716530   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:51.244682   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:53.741272   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:55.743900   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:53.765757   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:55.766672   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:56.218347   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:58.715621   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:58.246553   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:00.740366   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:58.265496   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:58.958296   50613 pod_ready.go:81] duration metric: took 4m0.000224971s waiting for pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace to be "Ready" ...
	E1108 00:17:58.958324   50613 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1108 00:17:58.958349   50613 pod_ready.go:38] duration metric: took 4m11.678298333s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:17:58.958373   50613 kubeadm.go:640] restartCluster took 4m32.361691152s
	W1108 00:17:58.958429   50613 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1108 00:17:58.958455   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1108 00:18:01.214685   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:03.216848   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:03.239882   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:05.739403   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:06.321352   50505 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.385768547s)
	I1108 00:18:06.321435   50505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:06.335385   50505 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:18:06.345310   50505 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:18:06.355261   50505 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:18:06.355301   50505 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1108 00:18:06.570938   50505 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
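
Note: the Service-Kubelet preflight warning above is routine in these runs; minikube starts the kubelet itself rather than relying on the enabled systemd unit. A hedged sketch of how to reproduce the condition on the guest:

	# Check the state behind the preflight warning.
	systemctl is-enabled kubelet           # prints "disabled" when the warning fires
	# kubeadm's suggested remedy, which these runs leave unapplied:
	sudo systemctl enable kubelet.service
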
	I1108 00:18:05.715384   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:07.716056   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:07.739455   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:09.740028   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:09.716612   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:12.215477   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:11.742123   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:14.242024   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:15.847386   50613 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (16.888899647s)
	I1108 00:18:15.847471   50613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:15.865800   50613 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:18:15.877857   50613 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:18:15.888952   50613 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:18:15.889014   50613 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1108 00:18:16.126155   50613 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 00:18:17.730060   50505 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1108 00:18:17.730164   50505 kubeadm.go:322] [preflight] Running pre-flight checks
	I1108 00:18:17.730282   50505 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 00:18:17.730411   50505 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 00:18:17.730564   50505 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1108 00:18:17.730648   50505 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 00:18:17.732613   50505 out.go:204]   - Generating certificates and keys ...
	I1108 00:18:17.732709   50505 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1108 00:18:17.732788   50505 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1108 00:18:17.732916   50505 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1108 00:18:17.732995   50505 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1108 00:18:17.733104   50505 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1108 00:18:17.733186   50505 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1108 00:18:17.733265   50505 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1108 00:18:17.733344   50505 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1108 00:18:17.733429   50505 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1108 00:18:17.733526   50505 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1108 00:18:17.733572   50505 kubeadm.go:322] [certs] Using the existing "sa" key
	I1108 00:18:17.733640   50505 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 00:18:17.733699   50505 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 00:18:17.733763   50505 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 00:18:17.733838   50505 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 00:18:17.733905   50505 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 00:18:17.734002   50505 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 00:18:17.734088   50505 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 00:18:17.735708   50505 out.go:204]   - Booting up control plane ...
	I1108 00:18:17.735808   50505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 00:18:17.735898   50505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 00:18:17.735981   50505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 00:18:17.736113   50505 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 00:18:17.736209   50505 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 00:18:17.736255   50505 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1108 00:18:17.736431   50505 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1108 00:18:17.736517   50505 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503639 seconds
	I1108 00:18:17.736637   50505 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 00:18:17.736779   50505 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 00:18:17.736873   50505 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 00:18:17.737093   50505 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-320390 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 00:18:17.737168   50505 kubeadm.go:322] [bootstrap-token] Using token: 8lntxi.1hule2axpc9kkhcs
	I1108 00:18:17.738763   50505 out.go:204]   - Configuring RBAC rules ...
	I1108 00:18:17.738904   50505 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 00:18:17.739014   50505 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 00:18:17.739197   50505 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 00:18:17.739364   50505 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 00:18:17.739534   50505 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 00:18:17.739651   50505 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 00:18:17.739781   50505 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 00:18:17.739829   50505 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1108 00:18:17.739881   50505 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1108 00:18:17.739889   50505 kubeadm.go:322] 
	I1108 00:18:17.739956   50505 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1108 00:18:17.739964   50505 kubeadm.go:322] 
	I1108 00:18:17.740051   50505 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1108 00:18:17.740065   50505 kubeadm.go:322] 
	I1108 00:18:17.740094   50505 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1108 00:18:17.740165   50505 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 00:18:17.740229   50505 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 00:18:17.740239   50505 kubeadm.go:322] 
	I1108 00:18:17.740311   50505 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1108 00:18:17.740320   50505 kubeadm.go:322] 
	I1108 00:18:17.740375   50505 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 00:18:17.740385   50505 kubeadm.go:322] 
	I1108 00:18:17.740443   50505 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1108 00:18:17.740528   50505 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 00:18:17.740629   50505 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 00:18:17.740640   50505 kubeadm.go:322] 
	I1108 00:18:17.740733   50505 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 00:18:17.740840   50505 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1108 00:18:17.740860   50505 kubeadm.go:322] 
	I1108 00:18:17.740959   50505 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8lntxi.1hule2axpc9kkhcs \
	I1108 00:18:17.741077   50505 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 \
	I1108 00:18:17.741106   50505 kubeadm.go:322] 	--control-plane 
	I1108 00:18:17.741114   50505 kubeadm.go:322] 
	I1108 00:18:17.741207   50505 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1108 00:18:17.741221   50505 kubeadm.go:322] 
	I1108 00:18:17.741312   50505 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8lntxi.1hule2axpc9kkhcs \
	I1108 00:18:17.741435   50505 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 
	I1108 00:18:17.741451   50505 cni.go:84] Creating CNI manager for ""
	I1108 00:18:17.741460   50505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:18:17.742996   50505 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:18:17.744307   50505 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:18:17.800065   50505 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
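
Note: the bridge CNI step above copies a 457-byte conflist into /etc/cni/net.d. The file body is not shown in the log; the following is a hypothetical minimal bridge conflist of roughly that shape (plugin names and the pod subnet are illustrative assumptions, not the file minikube actually wrote):

	# Hypothetical stand-in for 1-k8s.conflist: a bridge plugin with
	# host-local IPAM plus portmap is the standard minimal layout for a
	# single-node bridge CNI.
	sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF' >/dev/null
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
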
	I1108 00:18:17.844561   50505 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 00:18:17.844628   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:17.844636   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=no-preload-320390 minikube.k8s.io/updated_at=2023_11_08T00_18_17_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:18.268124   50505 ops.go:34] apiserver oom_adj: -16
	I1108 00:18:18.268268   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:18.391271   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:18.999821   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:14.715492   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:16.716036   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:19.217395   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:16.739748   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:18.722551   51228 pod_ready.go:81] duration metric: took 4m0.000232672s waiting for pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace to be "Ready" ...
	E1108 00:18:18.722600   51228 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1108 00:18:18.722616   51228 pod_ready.go:38] duration metric: took 4m7.657742468s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:18:18.722637   51228 kubeadm.go:640] restartCluster took 4m28.262375275s
	W1108 00:18:18.722722   51228 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1108 00:18:18.722756   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1108 00:18:19.500069   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:20.000575   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:20.500545   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:20.999918   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:21.499960   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:22.000673   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:22.499811   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:23.000501   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:23.499942   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:24.000407   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:21.217427   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:23.715751   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:27.224428   50613 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1108 00:18:27.224497   50613 kubeadm.go:322] [preflight] Running pre-flight checks
	I1108 00:18:27.224589   50613 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 00:18:27.224720   50613 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 00:18:27.224916   50613 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1108 00:18:27.225019   50613 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 00:18:27.226893   50613 out.go:204]   - Generating certificates and keys ...
	I1108 00:18:27.227001   50613 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1108 00:18:27.227091   50613 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1108 00:18:27.227201   50613 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1108 00:18:27.227279   50613 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1108 00:18:27.227365   50613 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1108 00:18:27.227433   50613 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1108 00:18:27.227517   50613 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1108 00:18:27.227602   50613 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1108 00:18:27.227719   50613 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1108 00:18:27.227808   50613 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1108 00:18:27.227864   50613 kubeadm.go:322] [certs] Using the existing "sa" key
	I1108 00:18:27.227938   50613 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 00:18:27.228013   50613 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 00:18:27.228102   50613 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 00:18:27.228186   50613 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 00:18:27.228264   50613 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 00:18:27.228387   50613 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 00:18:27.228479   50613 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 00:18:27.229827   50613 out.go:204]   - Booting up control plane ...
	I1108 00:18:27.229950   50613 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 00:18:27.230032   50613 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 00:18:27.230124   50613 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 00:18:27.230265   50613 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 00:18:27.230387   50613 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 00:18:27.230447   50613 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1108 00:18:27.230699   50613 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1108 00:18:27.230810   50613 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503846 seconds
	I1108 00:18:27.230970   50613 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 00:18:27.231145   50613 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 00:18:27.231237   50613 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 00:18:27.231478   50613 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-253253 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 00:18:27.231573   50613 kubeadm.go:322] [bootstrap-token] Using token: vyjibp.12wjj754q6czu5uo
	I1108 00:18:27.233159   50613 out.go:204]   - Configuring RBAC rules ...
	I1108 00:18:27.233266   50613 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 00:18:27.233340   50613 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 00:18:27.233454   50613 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 00:18:27.233558   50613 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 00:18:27.233693   50613 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 00:18:27.233793   50613 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 00:18:27.233943   50613 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 00:18:27.234012   50613 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1108 00:18:27.234074   50613 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1108 00:18:27.234086   50613 kubeadm.go:322] 
	I1108 00:18:27.234174   50613 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1108 00:18:27.234191   50613 kubeadm.go:322] 
	I1108 00:18:27.234300   50613 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1108 00:18:27.234310   50613 kubeadm.go:322] 
	I1108 00:18:27.234337   50613 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1108 00:18:27.234388   50613 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 00:18:27.234432   50613 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 00:18:27.234436   50613 kubeadm.go:322] 
	I1108 00:18:27.234490   50613 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1108 00:18:27.234507   50613 kubeadm.go:322] 
	I1108 00:18:27.234567   50613 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 00:18:27.234577   50613 kubeadm.go:322] 
	I1108 00:18:27.234651   50613 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1108 00:18:27.234756   50613 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 00:18:27.234858   50613 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 00:18:27.234873   50613 kubeadm.go:322] 
	I1108 00:18:27.234959   50613 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 00:18:27.235056   50613 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1108 00:18:27.235066   50613 kubeadm.go:322] 
	I1108 00:18:27.235184   50613 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token vyjibp.12wjj754q6czu5uo \
	I1108 00:18:27.235334   50613 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 \
	I1108 00:18:27.235369   50613 kubeadm.go:322] 	--control-plane 
	I1108 00:18:27.235378   50613 kubeadm.go:322] 
	I1108 00:18:27.235476   50613 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1108 00:18:27.235487   50613 kubeadm.go:322] 
	I1108 00:18:27.235585   50613 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vyjibp.12wjj754q6czu5uo \
	I1108 00:18:27.235734   50613 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 
	I1108 00:18:27.235751   50613 cni.go:84] Creating CNI manager for ""
	I1108 00:18:27.235759   50613 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:18:27.237411   50613 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:18:24.499703   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:24.999659   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:25.499724   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:26.000534   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:26.500532   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:26.999903   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:27.500582   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:28.000156   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:28.500443   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:29.000019   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:26.213623   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:28.214432   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:29.500525   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:29.999698   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:30.173272   50505 kubeadm.go:1081] duration metric: took 12.328709999s to wait for elevateKubeSystemPrivileges.
	I1108 00:18:30.173304   50505 kubeadm.go:406] StartCluster complete in 5m9.613679996s
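
Note: the long run of "kubectl get sa default" retries above is the elevateKubeSystemPrivileges step timed in this block: after creating the minikube-rbac clusterrolebinding and labeling the node, minikube polls until the token controller has created the "default" ServiceAccount before declaring StartCluster complete. A shell sketch of that poll (minikube does this in Go; the interval is inferred from the ~0.5s spacing of the retries above):

	K=/var/lib/minikube/binaries/v1.28.3/kubectl
	until sudo "$K" --kubeconfig=/var/lib/minikube/kubeconfig \
	    get sa default >/dev/null 2>&1; do
	  sleep 0.5   # assumed interval; the log shows roughly 0.5s between retries
	done
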
	I1108 00:18:30.173323   50505 settings.go:142] acquiring lock: {Name:mk24113e0811d0822c92609e9886aa6fa175d90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:18:30.173399   50505 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:18:30.175022   50505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:18:30.175277   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 00:18:30.175394   50505 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1108 00:18:30.175512   50505 addons.go:69] Setting storage-provisioner=true in profile "no-preload-320390"
	I1108 00:18:30.175534   50505 addons.go:231] Setting addon storage-provisioner=true in "no-preload-320390"
	W1108 00:18:30.175546   50505 addons.go:240] addon storage-provisioner should already be in state true
	I1108 00:18:30.175591   50505 host.go:66] Checking if "no-preload-320390" exists ...
	I1108 00:18:30.175595   50505 config.go:182] Loaded profile config "no-preload-320390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:18:30.175648   50505 addons.go:69] Setting default-storageclass=true in profile "no-preload-320390"
	I1108 00:18:30.175669   50505 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-320390"
	I1108 00:18:30.175856   50505 addons.go:69] Setting metrics-server=true in profile "no-preload-320390"
	I1108 00:18:30.175880   50505 addons.go:231] Setting addon metrics-server=true in "no-preload-320390"
	W1108 00:18:30.175890   50505 addons.go:240] addon metrics-server should already be in state true
	I1108 00:18:30.175932   50505 host.go:66] Checking if "no-preload-320390" exists ...
	I1108 00:18:30.176004   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.176047   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.176074   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.176110   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.176255   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.176297   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.193487   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34549
	I1108 00:18:30.194065   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.194643   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38457
	I1108 00:18:30.194791   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.194809   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.195197   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.195244   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.195454   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35159
	I1108 00:18:30.195741   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.195758   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.195840   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.195975   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.196019   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.196254   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.196377   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.196401   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.196444   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetState
	I1108 00:18:30.196747   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.197318   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.197365   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.200432   50505 addons.go:231] Setting addon default-storageclass=true in "no-preload-320390"
	W1108 00:18:30.200454   50505 addons.go:240] addon default-storageclass should already be in state true
	I1108 00:18:30.200482   50505 host.go:66] Checking if "no-preload-320390" exists ...
	I1108 00:18:30.200858   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.200904   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.214840   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45815
	I1108 00:18:30.215335   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.215693   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.215710   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.216018   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.216163   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetState
	I1108 00:18:30.216761   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32969
	I1108 00:18:30.217467   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.218005   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:18:30.218255   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.218276   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.218567   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.218686   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetState
	I1108 00:18:30.218895   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33449
	I1108 00:18:30.219282   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.221453   50505 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:18:30.219887   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.220152   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:18:30.227122   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.227187   50505 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:18:30.227203   50505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 00:18:30.227220   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:18:30.229126   50505 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1108 00:18:30.227716   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.230458   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:18:30.231018   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:18:30.231625   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:18:30.231640   50505 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 00:18:30.231664   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:18:30.231663   50505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 00:18:30.231687   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:18:30.231871   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:18:30.232040   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:18:30.232130   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.232164   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.232167   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:18:30.234984   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:18:30.235307   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:18:30.235327   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:18:30.235589   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:18:30.235819   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:18:30.236102   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:18:30.236409   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:18:30.248939   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33483
	I1108 00:18:30.249596   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.250088   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.250105   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.250535   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.250715   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetState
	I1108 00:18:30.252631   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:18:30.252909   50505 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 00:18:30.252923   50505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 00:18:30.252941   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:18:30.255926   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:18:30.256320   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:18:30.256354   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:18:30.256440   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:18:30.256639   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:18:30.256795   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:18:30.257009   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:18:30.299537   50505 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-320390" context rescaled to 1 replicas
	I1108 00:18:30.299586   50505 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.176 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 00:18:30.301520   50505 out.go:177] * Verifying Kubernetes components...
	I1108 00:18:27.238758   50613 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:18:27.263679   50613 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1108 00:18:27.350198   50613 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 00:18:27.350271   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:27.350293   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=embed-certs-253253 minikube.k8s.io/updated_at=2023_11_08T00_18_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:27.409145   50613 ops.go:34] apiserver oom_adj: -16
	I1108 00:18:27.761874   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:27.882030   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:28.495425   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:28.995764   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:29.495154   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:29.994859   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:30.495492   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:30.995328   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:31.495353   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:30.303227   50505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:30.426941   50505 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 00:18:30.426964   50505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1108 00:18:30.450862   50505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:18:30.456250   50505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 00:18:30.482239   50505 node_ready.go:35] waiting up to 6m0s for node "no-preload-320390" to be "Ready" ...
	I1108 00:18:30.482286   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 00:18:30.493041   50505 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 00:18:30.493073   50505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 00:18:30.542548   50505 node_ready.go:49] node "no-preload-320390" has status "Ready":"True"
	I1108 00:18:30.542579   50505 node_ready.go:38] duration metric: took 60.300148ms waiting for node "no-preload-320390" to be "Ready" ...
	I1108 00:18:30.542593   50505 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
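
Note: the "extra waiting" started above checks the Ready condition of every system-critical pod matching the listed labels. A rough kubectl equivalent for one of those selectors (hedge: minikube implements this against the API in Go, not by shelling out to kubectl):

	# Repeat per selector: k8s-app=kube-dns, component=etcd,
	# component=kube-apiserver, component=kube-controller-manager,
	# k8s-app=kube-proxy, component=kube-scheduler.
	sudo /var/lib/minikube/binaries/v1.28.3/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
	  wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
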
	I1108 00:18:30.554527   50505 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:18:30.554560   50505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 00:18:30.648882   50505 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:30.658134   50505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:18:32.959227   50505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.50832393s)
	I1108 00:18:32.959242   50505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.502960333s)
	I1108 00:18:32.959281   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:32.959287   50505 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.476976723s)
	I1108 00:18:32.959301   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:32.959347   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:32.959307   50505 start.go:926] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
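
Note: the sed pipeline that completed above rewrites the coredns ConfigMap in place so guest pods can resolve host.minikube.internal to the host-side gateway (192.168.61.1 here). A hedged way to inspect the result, with the fragment reconstructed from the sed expressions in the logged command:

	sudo /var/lib/minikube/binaries/v1.28.3/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
	  get configmap coredns -o jsonpath='{.data.Corefile}'
	# Expected fragment (plus a 'log' directive inserted before 'errors'):
	#        hosts {
	#           192.168.61.1 host.minikube.internal
	#           fallthrough
	#        }
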
	I1108 00:18:32.959293   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:32.959711   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:32.959729   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:32.959748   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:32.959761   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:32.959771   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:32.959780   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:32.959795   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:32.959807   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:32.960123   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:32.960137   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:32.960207   50505 main.go:141] libmachine: (no-preload-320390) DBG | Closing plugin on server side
	I1108 00:18:32.960229   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:32.960237   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:33.007609   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:33.007641   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:33.007926   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:33.007945   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:33.106167   50505 pod_ready.go:102] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:33.284838   50505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.626637787s)
	I1108 00:18:33.284900   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:33.284916   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:33.285239   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:33.285259   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:33.285269   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:33.285278   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:33.285579   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:33.285612   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:33.285626   50505 addons.go:467] Verifying addon metrics-server=true in "no-preload-320390"
	I1108 00:18:33.285579   50505 main.go:141] libmachine: (no-preload-320390) DBG | Closing plugin on server side
	I1108 00:18:33.288563   50505 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1108 00:18:33.290062   50505 addons.go:502] enable addons completed in 3.114669599s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1108 00:18:30.231324   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:32.715318   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:33.473926   51228 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.751140561s)
	I1108 00:18:33.473999   51228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:33.489630   51228 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:18:33.501413   51228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:18:33.513531   51228 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:18:33.513588   51228 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1108 00:18:33.767243   51228 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 00:18:31.995169   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:32.494991   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:32.995423   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:33.494761   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:33.995099   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:34.494829   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:34.995699   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:35.495034   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:35.995563   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:36.494752   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:35.563227   50505 pod_ready.go:102] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:37.563703   50505 pod_ready.go:102] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:34.715399   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:36.717212   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:39.215769   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:36.995285   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:37.495447   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:37.995529   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:38.494898   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:38.995450   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:39.494831   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:39.994880   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:40.097031   50613 kubeadm.go:1081] duration metric: took 12.746819294s to wait for elevateKubeSystemPrivileges.
	I1108 00:18:40.097074   50613 kubeadm.go:406] StartCluster complete in 5m13.552864243s
	I1108 00:18:40.097102   50613 settings.go:142] acquiring lock: {Name:mk24113e0811d0822c92609e9886aa6fa175d90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:18:40.097182   50613 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:18:40.099232   50613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:18:40.099513   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 00:18:40.099522   50613 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1108 00:18:40.099603   50613 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-253253"
	I1108 00:18:40.099612   50613 addons.go:69] Setting default-storageclass=true in profile "embed-certs-253253"
	I1108 00:18:40.099625   50613 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-253253"
	I1108 00:18:40.099626   50613 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-253253"
	W1108 00:18:40.099635   50613 addons.go:240] addon storage-provisioner should already be in state true
	I1108 00:18:40.099675   50613 host.go:66] Checking if "embed-certs-253253" exists ...
	I1108 00:18:40.099724   50613 config.go:182] Loaded profile config "embed-certs-253253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:18:40.099769   50613 addons.go:69] Setting metrics-server=true in profile "embed-certs-253253"
	I1108 00:18:40.099783   50613 addons.go:231] Setting addon metrics-server=true in "embed-certs-253253"
	W1108 00:18:40.099791   50613 addons.go:240] addon metrics-server should already be in state true
	I1108 00:18:40.099827   50613 host.go:66] Checking if "embed-certs-253253" exists ...
	I1108 00:18:40.100063   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.100064   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.100085   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.100086   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.100199   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.100229   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.117281   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35397
	I1108 00:18:40.117806   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.118339   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.118364   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.118717   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.118761   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38821
	I1108 00:18:40.119093   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.119311   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.119334   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.119497   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.119520   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.119668   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
	I1108 00:18:40.119841   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.119970   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.120403   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.120436   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.120443   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.120456   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.120895   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.121048   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetState
	I1108 00:18:40.123728   50613 addons.go:231] Setting addon default-storageclass=true in "embed-certs-253253"
	W1108 00:18:40.123746   50613 addons.go:240] addon default-storageclass should already be in state true
	I1108 00:18:40.123774   50613 host.go:66] Checking if "embed-certs-253253" exists ...
	I1108 00:18:40.124049   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.124073   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.139787   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39437
	I1108 00:18:40.140217   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.140776   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.140799   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.141358   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.143152   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34997
	I1108 00:18:40.143448   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetState
	I1108 00:18:40.144341   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.145156   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.145175   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.145536   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.145695   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:18:40.146126   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.146151   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.147863   50613 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:18:40.149252   50613 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:18:40.149270   50613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 00:18:40.149288   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:18:40.149701   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41685
	I1108 00:18:40.150096   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.150599   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.150613   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.151053   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.151223   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetState
	I1108 00:18:40.152047   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:18:40.152462   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:18:40.152476   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:18:40.152718   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:18:40.152834   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:18:40.152927   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:18:40.153008   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:18:40.153394   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:18:40.155041   50613 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1108 00:18:40.156603   50613 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 00:18:40.156625   50613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 00:18:40.156642   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:18:40.159550   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:18:40.159952   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:18:40.159973   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:18:40.160151   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:18:40.160294   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:18:40.160403   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:18:40.160505   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:18:40.162863   50613 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-253253" context rescaled to 1 replicas
	I1108 00:18:40.162890   50613 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 00:18:40.164733   50613 out.go:177] * Verifying Kubernetes components...
	I1108 00:18:40.166082   50613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:40.167562   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36079
	I1108 00:18:40.167938   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.168414   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.168433   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.168805   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.169056   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetState
	I1108 00:18:40.170751   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:18:40.171377   50613 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 00:18:40.171389   50613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 00:18:40.171402   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:18:40.174508   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:18:40.174826   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:18:40.174859   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:18:40.175035   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:18:40.175182   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:18:40.175341   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:18:40.175467   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:18:40.387003   50613 node_ready.go:35] waiting up to 6m0s for node "embed-certs-253253" to be "Ready" ...
	I1108 00:18:40.387126   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 00:18:40.398413   50613 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 00:18:40.398489   50613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1108 00:18:40.400162   50613 node_ready.go:49] node "embed-certs-253253" has status "Ready":"True"
	I1108 00:18:40.400189   50613 node_ready.go:38] duration metric: took 13.150355ms waiting for node "embed-certs-253253" to be "Ready" ...
	I1108 00:18:40.400204   50613 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:18:40.416263   50613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 00:18:40.420346   50613 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-thtp4" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:40.441486   50613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:18:40.468701   50613 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 00:18:40.468731   50613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 00:18:40.546438   50613 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:18:40.546475   50613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 00:18:40.620999   50613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:18:41.963134   50613 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.575984932s)
	I1108 00:18:41.963222   50613 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1108 00:18:41.963099   50613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.546802194s)
	I1108 00:18:41.963311   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:41.963342   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:41.963771   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:41.963821   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:41.963843   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:41.963862   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:41.964176   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:41.964202   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:41.964188   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Closing plugin on server side
	I1108 00:18:41.997903   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:41.997987   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:41.998341   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Closing plugin on server side
	I1108 00:18:41.998428   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:41.998487   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:42.447761   50613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.006222409s)
	I1108 00:18:42.447810   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:42.447824   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:42.448092   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:42.448109   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:42.448110   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Closing plugin on server side
	I1108 00:18:42.448127   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:42.448143   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:42.449994   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Closing plugin on server side
	I1108 00:18:42.450013   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:42.450027   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:42.484250   50613 pod_ready.go:102] pod "coredns-5dd5756b68-thtp4" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:42.788997   50613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.167954058s)
	I1108 00:18:42.789042   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:42.789057   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:42.789342   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Closing plugin on server side
	I1108 00:18:42.789395   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:42.789416   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:42.789427   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:42.789437   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:42.789673   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:42.789698   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:42.789709   50613 addons.go:467] Verifying addon metrics-server=true in "embed-certs-253253"
	I1108 00:18:42.792162   50613 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1108 00:18:39.563860   50505 pod_ready.go:102] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:41.565166   50505 pod_ready.go:102] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:44.063902   50505 pod_ready.go:102] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:41.216274   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:43.717636   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:45.631283   51228 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1108 00:18:45.631354   51228 kubeadm.go:322] [preflight] Running pre-flight checks
	I1108 00:18:45.631464   51228 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 00:18:45.631583   51228 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 00:18:45.631736   51228 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1108 00:18:45.631848   51228 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 00:18:45.633488   51228 out.go:204]   - Generating certificates and keys ...
	I1108 00:18:45.633579   51228 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1108 00:18:45.633656   51228 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1108 00:18:45.633756   51228 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1108 00:18:45.633840   51228 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1108 00:18:45.633947   51228 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1108 00:18:45.634041   51228 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1108 00:18:45.634140   51228 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1108 00:18:45.634244   51228 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1108 00:18:45.634357   51228 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1108 00:18:45.634458   51228 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1108 00:18:45.634541   51228 kubeadm.go:322] [certs] Using the existing "sa" key
	I1108 00:18:45.634625   51228 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 00:18:45.634713   51228 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 00:18:45.634781   51228 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 00:18:45.634865   51228 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 00:18:45.634935   51228 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 00:18:45.635044   51228 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 00:18:45.635133   51228 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 00:18:45.636666   51228 out.go:204]   - Booting up control plane ...
	I1108 00:18:45.636755   51228 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 00:18:45.636862   51228 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 00:18:45.636939   51228 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 00:18:45.637065   51228 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 00:18:45.637164   51228 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 00:18:45.637221   51228 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1108 00:18:45.637410   51228 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1108 00:18:45.637479   51228 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.005347 seconds
	I1108 00:18:45.637583   51228 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 00:18:45.637710   51228 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 00:18:45.637782   51228 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 00:18:45.637961   51228 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-039263 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 00:18:45.638007   51228 kubeadm.go:322] [bootstrap-token] Using token: ub1ww5.kh6zrwfrcg8jc9rc
	I1108 00:18:45.639491   51228 out.go:204]   - Configuring RBAC rules ...
	I1108 00:18:45.639627   51228 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 00:18:45.639743   51228 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 00:18:45.639918   51228 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 00:18:45.640060   51228 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 00:18:45.640240   51228 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 00:18:45.640344   51228 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 00:18:45.640487   51228 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 00:18:45.640546   51228 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1108 00:18:45.640625   51228 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1108 00:18:45.640643   51228 kubeadm.go:322] 
	I1108 00:18:45.640726   51228 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1108 00:18:45.640737   51228 kubeadm.go:322] 
	I1108 00:18:45.640850   51228 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1108 00:18:45.640860   51228 kubeadm.go:322] 
	I1108 00:18:45.640891   51228 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1108 00:18:45.640968   51228 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 00:18:45.641042   51228 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 00:18:45.641048   51228 kubeadm.go:322] 
	I1108 00:18:45.641124   51228 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1108 00:18:45.641137   51228 kubeadm.go:322] 
	I1108 00:18:45.641193   51228 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 00:18:45.641204   51228 kubeadm.go:322] 
	I1108 00:18:45.641266   51228 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1108 00:18:45.641372   51228 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 00:18:45.641485   51228 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 00:18:45.641493   51228 kubeadm.go:322] 
	I1108 00:18:45.641589   51228 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 00:18:45.641704   51228 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1108 00:18:45.641714   51228 kubeadm.go:322] 
	I1108 00:18:45.641815   51228 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token ub1ww5.kh6zrwfrcg8jc9rc \
	I1108 00:18:45.641939   51228 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 \
	I1108 00:18:45.641971   51228 kubeadm.go:322] 	--control-plane 
	I1108 00:18:45.641979   51228 kubeadm.go:322] 
	I1108 00:18:45.642084   51228 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1108 00:18:45.642093   51228 kubeadm.go:322] 
	I1108 00:18:45.642216   51228 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token ub1ww5.kh6zrwfrcg8jc9rc \
	I1108 00:18:45.642356   51228 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 
	I1108 00:18:45.642372   51228 cni.go:84] Creating CNI manager for ""
	I1108 00:18:45.642379   51228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:18:45.644712   51228 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:18:45.646211   51228 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:18:45.672621   51228 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1108 00:18:45.700061   51228 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 00:18:45.700142   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:45.700153   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=default-k8s-diff-port-039263 minikube.k8s.io/updated_at=2023_11_08T00_18_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:45.805900   51228 ops.go:34] apiserver oom_adj: -16
	I1108 00:18:42.794167   50613 addons.go:502] enable addons completed in 2.694639707s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1108 00:18:44.953906   50613 pod_ready.go:92] pod "coredns-5dd5756b68-thtp4" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.953928   50613 pod_ready.go:81] duration metric: took 4.533558234s waiting for pod "coredns-5dd5756b68-thtp4" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.953936   50613 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.958854   50613 pod_ready.go:92] pod "etcd-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.958880   50613 pod_ready.go:81] duration metric: took 4.937561ms waiting for pod "etcd-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.958892   50613 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.964282   50613 pod_ready.go:92] pod "kube-apiserver-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.964305   50613 pod_ready.go:81] duration metric: took 5.40486ms waiting for pod "kube-apiserver-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.964317   50613 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.969544   50613 pod_ready.go:92] pod "kube-controller-manager-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.969561   50613 pod_ready.go:81] duration metric: took 5.237377ms waiting for pod "kube-controller-manager-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.969568   50613 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-shp9z" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.974340   50613 pod_ready.go:92] pod "kube-proxy-shp9z" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.974357   50613 pod_ready.go:81] duration metric: took 4.78369ms waiting for pod "kube-proxy-shp9z" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.974367   50613 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:45.350442   50613 pod_ready.go:92] pod "kube-scheduler-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:45.350465   50613 pod_ready.go:81] duration metric: took 376.091394ms waiting for pod "kube-scheduler-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:45.350473   50613 pod_ready.go:38] duration metric: took 4.950259719s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:18:45.350487   50613 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:18:45.350529   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:18:45.366477   50613 api_server.go:72] duration metric: took 5.203563902s to wait for apiserver process to appear ...
	I1108 00:18:45.366502   50613 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:18:45.366519   50613 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1108 00:18:45.375074   50613 api_server.go:279] https://192.168.39.159:8443/healthz returned 200:
	ok
	I1108 00:18:45.376646   50613 api_server.go:141] control plane version: v1.28.3
	I1108 00:18:45.376666   50613 api_server.go:131] duration metric: took 10.158963ms to wait for apiserver health ...
	I1108 00:18:45.376674   50613 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:18:45.554560   50613 system_pods.go:59] 8 kube-system pods found
	I1108 00:18:45.554598   50613 system_pods.go:61] "coredns-5dd5756b68-thtp4" [a3671b72-d562-4be2-9942-e971ee31b2c3] Running
	I1108 00:18:45.554605   50613 system_pods.go:61] "etcd-embed-certs-253253" [271bb11f-9263-43bb-a1ad-950b066f46bc] Running
	I1108 00:18:45.554611   50613 system_pods.go:61] "kube-apiserver-embed-certs-253253" [f247270e-3c67-4b37-a6ee-31934a59dd3c] Running
	I1108 00:18:45.554618   50613 system_pods.go:61] "kube-controller-manager-embed-certs-253253" [431c2e96-fff2-4076-95d4-11aa43e0d417] Running
	I1108 00:18:45.554624   50613 system_pods.go:61] "kube-proxy-shp9z" [cda240f2-977b-4318-9ee4-74f0090af489] Running
	I1108 00:18:45.554635   50613 system_pods.go:61] "kube-scheduler-embed-certs-253253" [a22238ad-7283-4dbf-8ff2-5626761a6e08] Running
	I1108 00:18:45.554655   50613 system_pods.go:61] "metrics-server-57f55c9bc5-f8rk4" [927cc877-7a22-47e3-b666-1adf0cc1b5c6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:18:45.554697   50613 system_pods.go:61] "storage-provisioner" [fa05e7e5-87e7-43ac-af74-1c8a713b51c5] Running
	I1108 00:18:45.554712   50613 system_pods.go:74] duration metric: took 178.032339ms to wait for pod list to return data ...
	I1108 00:18:45.554722   50613 default_sa.go:34] waiting for default service account to be created ...
	I1108 00:18:45.750181   50613 default_sa.go:45] found service account: "default"
	I1108 00:18:45.750210   50613 default_sa.go:55] duration metric: took 195.480878ms for default service account to be created ...
	I1108 00:18:45.750220   50613 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 00:18:45.953261   50613 system_pods.go:86] 8 kube-system pods found
	I1108 00:18:45.953303   50613 system_pods.go:89] "coredns-5dd5756b68-thtp4" [a3671b72-d562-4be2-9942-e971ee31b2c3] Running
	I1108 00:18:45.953312   50613 system_pods.go:89] "etcd-embed-certs-253253" [271bb11f-9263-43bb-a1ad-950b066f46bc] Running
	I1108 00:18:45.953320   50613 system_pods.go:89] "kube-apiserver-embed-certs-253253" [f247270e-3c67-4b37-a6ee-31934a59dd3c] Running
	I1108 00:18:45.953329   50613 system_pods.go:89] "kube-controller-manager-embed-certs-253253" [431c2e96-fff2-4076-95d4-11aa43e0d417] Running
	I1108 00:18:45.953348   50613 system_pods.go:89] "kube-proxy-shp9z" [cda240f2-977b-4318-9ee4-74f0090af489] Running
	I1108 00:18:45.953360   50613 system_pods.go:89] "kube-scheduler-embed-certs-253253" [a22238ad-7283-4dbf-8ff2-5626761a6e08] Running
	I1108 00:18:45.953375   50613 system_pods.go:89] "metrics-server-57f55c9bc5-f8rk4" [927cc877-7a22-47e3-b666-1adf0cc1b5c6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:18:45.953387   50613 system_pods.go:89] "storage-provisioner" [fa05e7e5-87e7-43ac-af74-1c8a713b51c5] Running
	I1108 00:18:45.953402   50613 system_pods.go:126] duration metric: took 203.174777ms to wait for k8s-apps to be running ...
	I1108 00:18:45.953414   50613 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 00:18:45.953471   50613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:45.969669   50613 system_svc.go:56] duration metric: took 16.24852ms WaitForService to wait for kubelet.
	I1108 00:18:45.969698   50613 kubeadm.go:581] duration metric: took 5.806787278s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1108 00:18:45.969720   50613 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:18:46.150807   50613 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:18:46.150839   50613 node_conditions.go:123] node cpu capacity is 2
	I1108 00:18:46.150853   50613 node_conditions.go:105] duration metric: took 181.127043ms to run NodePressure ...
	I1108 00:18:46.150866   50613 start.go:228] waiting for startup goroutines ...
	I1108 00:18:46.150876   50613 start.go:233] waiting for cluster config update ...
	I1108 00:18:46.150886   50613 start.go:242] writing updated cluster config ...
	I1108 00:18:46.151185   50613 ssh_runner.go:195] Run: rm -f paused
	I1108 00:18:46.209047   50613 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1108 00:18:46.211074   50613 out.go:177] * Done! kubectl is now configured to use "embed-certs-253253" cluster and "default" namespace by default
	I1108 00:18:44.564102   50505 pod_ready.go:97] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.61.176 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-11-08 00:18:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-11-08 00:18:33 +0000 UTC,FinishedAt:2023-11-08 00:18:43 +0000 UTC,ContainerID:cri-o://4ffd62a60718dd1c6133afefc215085069920afc1cca2f055336a977765569cb,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3 ContainerID:cri-o://4ffd62a60718dd1c6133afefc215085069920afc1cca2f055336a977765569cb Started:0xc0035e3d00 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1108 00:18:44.564132   50505 pod_ready.go:81] duration metric: took 13.91522436s waiting for pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace to be "Ready" ...
	E1108 00:18:44.564147   50505 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.61.176 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-11-08 00:18:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-11-08 00:18:33 +0000 UTC,FinishedAt:2023-11-08 00:18:43 +0000 UTC,ContainerID:cri-o://4ffd62a60718dd1c6133afefc215085069920afc1cca2f055336a977765569cb,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3 ContainerID:cri-o://4ffd62a60718dd1c6133afefc215085069920afc1cca2f055336a977765569cb Started:0xc0035e3d00 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1108 00:18:44.564158   50505 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vl7nr" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.573431   50505 pod_ready.go:92] pod "coredns-5dd5756b68-vl7nr" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.573462   50505 pod_ready.go:81] duration metric: took 9.295648ms waiting for pod "coredns-5dd5756b68-vl7nr" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.573473   50505 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.580792   50505 pod_ready.go:92] pod "etcd-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.580828   50505 pod_ready.go:81] duration metric: took 7.346504ms waiting for pod "etcd-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.580840   50505 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.587095   50505 pod_ready.go:92] pod "kube-apiserver-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.587117   50505 pod_ready.go:81] duration metric: took 6.268891ms waiting for pod "kube-apiserver-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.587130   50505 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.594022   50505 pod_ready.go:92] pod "kube-controller-manager-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.594039   50505 pod_ready.go:81] duration metric: took 6.901477ms waiting for pod "kube-controller-manager-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.594052   50505 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m6k8g" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.960144   50505 pod_ready.go:92] pod "kube-proxy-m6k8g" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.960162   50505 pod_ready.go:81] duration metric: took 366.102529ms waiting for pod "kube-proxy-m6k8g" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.960173   50505 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:45.361366   50505 pod_ready.go:92] pod "kube-scheduler-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:45.361388   50505 pod_ready.go:81] duration metric: took 401.208779ms waiting for pod "kube-scheduler-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:45.361396   50505 pod_ready.go:38] duration metric: took 14.818791823s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:18:45.361408   50505 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:18:45.361453   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:18:45.377632   50505 api_server.go:72] duration metric: took 15.078013421s to wait for apiserver process to appear ...
	I1108 00:18:45.377656   50505 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:18:45.377673   50505 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1108 00:18:45.383912   50505 api_server.go:279] https://192.168.61.176:8443/healthz returned 200:
	ok
	I1108 00:18:45.385131   50505 api_server.go:141] control plane version: v1.28.3
	I1108 00:18:45.385153   50505 api_server.go:131] duration metric: took 7.489916ms to wait for apiserver health ...
	I1108 00:18:45.385163   50505 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:18:45.565081   50505 system_pods.go:59] 8 kube-system pods found
	I1108 00:18:45.565112   50505 system_pods.go:61] "coredns-5dd5756b68-vl7nr" [4c6d5125-ebac-4931-9af7-045d1c4ba2b1] Running
	I1108 00:18:45.565120   50505 system_pods.go:61] "etcd-no-preload-320390" [fed32a26-d2ab-4470-b424-cc123c0afdf2] Running
	I1108 00:18:45.565127   50505 system_pods.go:61] "kube-apiserver-no-preload-320390" [4cc8b2c1-0f11-4fa9-ab08-0b6039e98b08] Running
	I1108 00:18:45.565134   50505 system_pods.go:61] "kube-controller-manager-no-preload-320390" [028b3d4e-ab62-44c3-b78e-268012d13db3] Running
	I1108 00:18:45.565141   50505 system_pods.go:61] "kube-proxy-m6k8g" [60b019bf-527c-4265-a67c-31e6cf377039] Running
	I1108 00:18:45.565149   50505 system_pods.go:61] "kube-scheduler-no-preload-320390" [c9c606b6-8188-4918-a5c6-cdc845ca5fb4] Running
	I1108 00:18:45.565157   50505 system_pods.go:61] "metrics-server-57f55c9bc5-n49bz" [26c5310d-c29f-476a-a520-bd693143e248] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:18:45.565171   50505 system_pods.go:61] "storage-provisioner" [bdba396c-182a-4bef-8ccb-2275534d89c8] Running
	I1108 00:18:45.565185   50505 system_pods.go:74] duration metric: took 180.015317ms to wait for pod list to return data ...
	I1108 00:18:45.565196   50505 default_sa.go:34] waiting for default service account to be created ...
	I1108 00:18:45.760190   50505 default_sa.go:45] found service account: "default"
	I1108 00:18:45.760217   50505 default_sa.go:55] duration metric: took 195.014175ms for default service account to be created ...
	I1108 00:18:45.760227   50505 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 00:18:45.966186   50505 system_pods.go:86] 8 kube-system pods found
	I1108 00:18:45.966223   50505 system_pods.go:89] "coredns-5dd5756b68-vl7nr" [4c6d5125-ebac-4931-9af7-045d1c4ba2b1] Running
	I1108 00:18:45.966231   50505 system_pods.go:89] "etcd-no-preload-320390" [fed32a26-d2ab-4470-b424-cc123c0afdf2] Running
	I1108 00:18:45.966239   50505 system_pods.go:89] "kube-apiserver-no-preload-320390" [4cc8b2c1-0f11-4fa9-ab08-0b6039e98b08] Running
	I1108 00:18:45.966245   50505 system_pods.go:89] "kube-controller-manager-no-preload-320390" [028b3d4e-ab62-44c3-b78e-268012d13db3] Running
	I1108 00:18:45.966252   50505 system_pods.go:89] "kube-proxy-m6k8g" [60b019bf-527c-4265-a67c-31e6cf377039] Running
	I1108 00:18:45.966259   50505 system_pods.go:89] "kube-scheduler-no-preload-320390" [c9c606b6-8188-4918-a5c6-cdc845ca5fb4] Running
	I1108 00:18:45.966268   50505 system_pods.go:89] "metrics-server-57f55c9bc5-n49bz" [26c5310d-c29f-476a-a520-bd693143e248] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:18:45.966279   50505 system_pods.go:89] "storage-provisioner" [bdba396c-182a-4bef-8ccb-2275534d89c8] Running
	I1108 00:18:45.966294   50505 system_pods.go:126] duration metric: took 206.05956ms to wait for k8s-apps to be running ...
	I1108 00:18:45.966305   50505 system_svc.go:44] waiting for kubelet service to be running ...
	I1108 00:18:45.966355   50505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:45.984753   50505 system_svc.go:56] duration metric: took 18.427005ms (WaitForService) to wait for kubelet.
	I1108 00:18:45.984781   50505 kubeadm.go:581] duration metric: took 15.685164805s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
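The kubelet service check above relies on systemctl's exit status: `systemctl is-active --quiet <unit>` exits 0 only when the unit is active, so no output parsing is needed. A minimal local sketch of the same check (assumes a systemd host; the log runs it over SSH inside the VM):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Exit code 0 means active; a non-zero exit surfaces as a non-nil error.
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }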
	I1108 00:18:45.984803   50505 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:18:46.159568   50505 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:18:46.159602   50505 node_conditions.go:123] node cpu capacity is 2
	I1108 00:18:46.159615   50505 node_conditions.go:105] duration metric: took 174.805156ms to run NodePressure ...
	I1108 00:18:46.159627   50505 start.go:228] waiting for startup goroutines ...
	I1108 00:18:46.159636   50505 start.go:233] waiting for cluster config update ...
	I1108 00:18:46.159649   50505 start.go:242] writing updated cluster config ...
	I1108 00:18:46.159934   50505 ssh_runner.go:195] Run: rm -f paused
	I1108 00:18:46.220234   50505 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1108 00:18:46.222217   50505 out.go:177] * Done! kubectl is now configured to use "no-preload-320390" cluster and "default" namespace by default
	I1108 00:18:46.222047   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:48.714709   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:46.109921   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:46.223968   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:46.849987   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:47.349982   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:47.850871   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:48.350081   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:48.850494   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:49.350809   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:49.850515   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:50.350227   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:50.850044   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:50.714976   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:53.214612   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:51.350594   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:51.850705   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:52.349971   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:52.850530   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:53.350696   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:53.850039   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:54.350523   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:54.849805   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:55.350560   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:55.849890   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:56.350679   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:56.849863   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:57.350004   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:57.850463   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:58.349999   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:58.850810   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:58.958213   51228 kubeadm.go:1081] duration metric: took 13.258132625s to wait for elevateKubeSystemPrivileges.
	I1108 00:18:58.958253   51228 kubeadm.go:406] StartCluster complete in 5m8.559036824s
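The run of identical `kubectl get sa default` lines above (00:18:46 through 00:18:58, roughly one every 500ms) is a retry loop: the command is re-run until the default service account exists and kubectl exits 0, or a deadline passes. The same pattern in a minimal Go sketch (the kubectl invocation and timeout here are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            // Exits 0 once the "default" ServiceAccount has been created.
            if exec.Command("kubectl", "get", "sa", "default").Run() == nil {
                fmt.Println("default service account exists")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for default service account")
    }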
	I1108 00:18:58.958281   51228 settings.go:142] acquiring lock: {Name:mk24113e0811d0822c92609e9886aa6fa175d90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:18:58.958371   51228 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:18:58.960083   51228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:18:58.960306   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 00:18:58.960417   51228 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1108 00:18:58.960497   51228 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-039263"
	I1108 00:18:58.960505   51228 config.go:182] Loaded profile config "default-k8s-diff-port-039263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:18:58.960517   51228 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-039263"
	I1108 00:18:58.960544   51228 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-039263"
	I1108 00:18:58.960521   51228 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-039263"
	I1108 00:18:58.960538   51228 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-039263"
	I1108 00:18:58.960588   51228 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-039263"
	W1108 00:18:58.960607   51228 addons.go:240] addon metrics-server should already be in state true
	I1108 00:18:58.960654   51228 host.go:66] Checking if "default-k8s-diff-port-039263" exists ...
	W1108 00:18:58.960566   51228 addons.go:240] addon storage-provisioner should already be in state true
	I1108 00:18:58.960732   51228 host.go:66] Checking if "default-k8s-diff-port-039263" exists ...
	I1108 00:18:58.961043   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:58.961079   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:58.961112   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:58.961115   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:58.961155   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:58.961164   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:58.980365   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41725
	I1108 00:18:58.980386   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46535
	I1108 00:18:58.980512   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45857
	I1108 00:18:58.980860   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:58.980912   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:58.980863   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:58.981328   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:58.981350   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:58.981457   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:58.981466   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:58.981477   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:58.981483   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:58.981861   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:58.981861   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:58.981863   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:58.982023   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetState
	I1108 00:18:58.982419   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:58.982429   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:58.982447   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:58.982464   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:58.985852   51228 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-039263"
	W1108 00:18:58.985875   51228 addons.go:240] addon default-storageclass should already be in state true
	I1108 00:18:58.985902   51228 host.go:66] Checking if "default-k8s-diff-port-039263" exists ...
	I1108 00:18:58.986359   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:58.986390   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:58.996161   51228 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-039263" context rescaled to 1 replica
	I1108 00:18:58.996200   51228 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.116 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 00:18:58.998257   51228 out.go:177] * Verifying Kubernetes components...
	I1108 00:18:58.999857   51228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:58.999917   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35521
	I1108 00:18:58.998777   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I1108 00:18:59.000380   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:59.001040   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:59.001093   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:59.001205   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:59.001478   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:59.001674   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:59.001690   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:59.001762   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetState
	I1108 00:18:59.002038   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:59.002209   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetState
	I1108 00:18:59.003822   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:18:59.006057   51228 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1108 00:18:59.004254   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:18:59.006174   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46331
	I1108 00:18:59.007678   51228 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 00:18:59.007688   51228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 00:18:59.007706   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:18:59.009545   51228 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:18:55.714548   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:57.715173   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:59.007989   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:59.010470   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:18:59.010632   51228 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:18:59.010640   51228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 00:18:59.010653   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:18:59.011015   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:18:59.011039   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:18:59.011227   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:59.011250   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:59.011650   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:59.011657   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:18:59.012158   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:59.012188   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:59.012671   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:18:59.012805   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:18:59.012925   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:18:59.013938   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:18:59.014329   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:18:59.014348   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:18:59.014493   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:18:59.014645   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:18:59.014770   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:18:59.014879   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:18:59.030160   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44631
	I1108 00:18:59.030558   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:59.031087   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:59.031101   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:59.031353   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:59.031558   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetState
	I1108 00:18:59.033203   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:18:59.033540   51228 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 00:18:59.033556   51228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 00:18:59.033573   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:18:59.036749   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:18:59.037158   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:18:59.037177   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:18:59.037364   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:18:59.037551   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:18:59.037684   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:18:59.037791   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:18:59.349254   51228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 00:18:59.451588   51228 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-039263" to be "Ready" ...
	I1108 00:18:59.451664   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 00:18:59.464584   51228 node_ready.go:49] node "default-k8s-diff-port-039263" has status "Ready":"True"
	I1108 00:18:59.464616   51228 node_ready.go:38] duration metric: took 12.97792ms waiting for node "default-k8s-diff-port-039263" to be "Ready" ...
	I1108 00:18:59.464629   51228 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:18:59.475428   51228 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7ktrv" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:59.481740   51228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:18:59.483627   51228 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 00:18:59.483644   51228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1108 00:18:59.599214   51228 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 00:18:59.599244   51228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 00:18:59.661512   51228 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:18:59.661537   51228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 00:18:59.726775   51228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:19:01.455332   51228 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.003642063s)
	I1108 00:19:01.455368   51228 start.go:926] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
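The sed pipeline above edits the CoreDNS Corefile in flight: it inserts a hosts block (mapping host.minikube.internal to the host gateway 192.168.72.1) immediately before the forward plugin, then kubectl-replaces the ConfigMap. A sketch of the same text edit in Go; the Corefile below is a representative example, not the exact ConfigMap from this run:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        corefile := []string{
            ".:53 {",
            "        errors",
            "        health",
            "        forward . /etc/resolv.conf",
            "}",
        }
        hosts := []string{
            "        hosts {",
            "           192.168.72.1 host.minikube.internal",
            "           fallthrough",
            "        }",
        }
        var out []string
        for _, line := range corefile {
            // Mirror the sed "/forward ./i" command: insert the hosts block
            // immediately before the forward plugin.
            if strings.Contains(line, "forward . /etc/resolv.conf") {
                out = append(out, hosts...)
            }
            out = append(out, line)
        }
        fmt.Println(strings.Join(out, "\n"))
    }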
	I1108 00:19:01.455575   51228 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.106281369s)
	I1108 00:19:01.455635   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:01.455659   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:01.455957   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:01.456004   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:01.456026   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:01.456048   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:01.456296   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:01.456332   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:01.456339   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Closing plugin on server side
	I1108 00:19:01.485941   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:01.485970   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:01.486229   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:01.486287   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Closing plugin on server side
	I1108 00:19:01.486294   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:01.599500   51228 pod_ready.go:102] pod "coredns-5dd5756b68-7ktrv" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:01.893463   51228 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.411687372s)
	I1108 00:19:01.893518   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:01.893530   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:01.893844   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:01.893887   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Closing plugin on server side
	I1108 00:19:01.893904   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:01.893918   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:01.893928   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:01.894199   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:01.894215   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:02.421714   51228 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.694889947s)
	I1108 00:19:02.421768   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:02.421785   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:02.422098   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:02.422123   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:02.422141   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:02.422160   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:02.422138   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Closing plugin on server side
	I1108 00:19:02.422425   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Closing plugin on server side
	I1108 00:19:02.422467   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:02.422480   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:02.422492   51228 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-039263"
	I1108 00:19:02.424446   51228 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1108 00:18:59.715708   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:02.214990   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:02.426041   51228 addons.go:502] enable addons completed in 3.465624772s: enabled=[default-storageclass storage-provisioner metrics-server]
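The metrics-server addon above goes on in a single kubectl invocation that applies four manifests at once via repeated -f flags (each manifest scp'd to /etc/kubernetes/addons/ first). Building that command in a minimal Go sketch, with paths taken from the log (the kubectl binary path is simplified):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        manifests := []string{
            "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml",
        }
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m) // one -f per manifest, applied in a single invocation
        }
        out, err := exec.Command("kubectl", args...).CombinedOutput()
        fmt.Println(string(out), err)
    }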
	I1108 00:19:02.549025   51228 pod_ready.go:97] pod "coredns-5dd5756b68-7ktrv" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.72.116 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-11-08 00:18:58 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-11-08 00:19:01 +0000 UTC,FinishedAt:2023-11-08 00:19:01 +0000 UTC,ContainerID:cri-o://31fbf2f57498e1f90b02c6fd31ebc03a12f99cb350d5e2c4e6eb7ae3b30853b9,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://31fbf2f57498e1f90b02c6fd31ebc03a12f99cb350d5e2c4e6eb7ae3b30853b9 Started:0xc0030b331c AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1108 00:19:02.549056   51228 pod_ready.go:81] duration metric: took 3.073604936s waiting for pod "coredns-5dd5756b68-7ktrv" in "kube-system" namespace to be "Ready" ...
	E1108 00:19:02.549069   51228 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-7ktrv" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.72.116 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-11-08 00:18:58 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-11-08 00:19:01 +0000 UTC,FinishedAt:2023-11-08 00:19:01 +0000 UTC,ContainerID:cri-o://31fbf2f57498e1f90b02c6fd31ebc03a12f99cb350d5e2c4e6eb7ae3b30853b9,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://31fbf2f57498e1f90b02c6fd31ebc03a12f99cb350d5e2c4e6eb7ae3b30853b9 Started:0xc0030b331c AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1108 00:19:02.549076   51228 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-tt9sm" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.096421   51228 pod_ready.go:92] pod "coredns-5dd5756b68-tt9sm" in "kube-system" namespace has status "Ready":"True"
	I1108 00:19:03.096449   51228 pod_ready.go:81] duration metric: took 547.365037ms waiting for pod "coredns-5dd5756b68-tt9sm" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.096461   51228 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.104473   51228 pod_ready.go:92] pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"True"
	I1108 00:19:03.104497   51228 pod_ready.go:81] duration metric: took 8.028055ms waiting for pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.104509   51228 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.108940   51228 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"True"
	I1108 00:19:03.108965   51228 pod_ready.go:81] duration metric: took 4.447315ms waiting for pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.108976   51228 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.458803   51228 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"True"
	I1108 00:19:03.458831   51228 pod_ready.go:81] duration metric: took 349.845574ms waiting for pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.458844   51228 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rhdhg" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:04.256435   51228 pod_ready.go:92] pod "kube-proxy-rhdhg" in "kube-system" namespace has status "Ready":"True"
	I1108 00:19:04.256457   51228 pod_ready.go:81] duration metric: took 797.605956ms waiting for pod "kube-proxy-rhdhg" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:04.256466   51228 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:04.655727   51228 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"True"
	I1108 00:19:04.655750   51228 pod_ready.go:81] duration metric: took 399.277263ms waiting for pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:04.655758   51228 pod_ready.go:38] duration metric: took 5.191103655s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
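Each pod_ready.go wait above resolves a pod's Ready condition from its status. The same condition can be read one-off with kubectl's jsonpath output, sketched here with a pod name taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("kubectl", "get", "pod",
            "kube-scheduler-default-k8s-diff-port-039263", "-n", "kube-system",
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Println("Ready:", string(out)) // "True" once the pod is Ready
    }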
	I1108 00:19:04.655772   51228 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:19:04.655823   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:19:04.671030   51228 api_server.go:72] duration metric: took 5.674798555s to wait for apiserver process to appear ...
	I1108 00:19:04.671059   51228 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:19:04.671076   51228 api_server.go:253] Checking apiserver healthz at https://192.168.72.116:8444/healthz ...
	I1108 00:19:04.677315   51228 api_server.go:279] https://192.168.72.116:8444/healthz returned 200:
	ok
	I1108 00:19:04.678430   51228 api_server.go:141] control plane version: v1.28.3
	I1108 00:19:04.678451   51228 api_server.go:131] duration metric: took 7.384898ms to wait for apiserver health ...
	I1108 00:19:04.678457   51228 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:19:04.866585   51228 system_pods.go:59] 8 kube-system pods found
	I1108 00:19:04.866617   51228 system_pods.go:61] "coredns-5dd5756b68-tt9sm" [964a0552-9be0-4dbb-9a2f-0be3c93b8f83] Running
	I1108 00:19:04.866622   51228 system_pods.go:61] "etcd-default-k8s-diff-port-039263" [36863807-9899-4a8e-9a18-e3d938be8e8a] Running
	I1108 00:19:04.866626   51228 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-039263" [88677a44-54e3-41d7-8395-7616396a52d4] Running
	I1108 00:19:04.866631   51228 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-039263" [61a04987-85c4-462c-a4a7-1438c079b72b] Running
	I1108 00:19:04.866635   51228 system_pods.go:61] "kube-proxy-rhdhg" [405b26b9-e6b3-440d-8f28-60db650079a8] Running
	I1108 00:19:04.866639   51228 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-039263" [2a36824a-77da-4a54-94f4-484452f1b714] Running
	I1108 00:19:04.866666   51228 system_pods.go:61] "metrics-server-57f55c9bc5-j6t7g" [5c0e827c-8281-4b51-b0c7-d43d0aa22e29] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:19:04.866676   51228 system_pods.go:61] "storage-provisioner" [4cace2ff-d7cd-4d31-9f11-d410bc675cbf] Running
	I1108 00:19:04.866684   51228 system_pods.go:74] duration metric: took 188.222131ms to wait for pod list to return data ...
	I1108 00:19:04.866691   51228 default_sa.go:34] waiting for default service account to be created ...
	I1108 00:19:05.056224   51228 default_sa.go:45] found service account: "default"
	I1108 00:19:05.056251   51228 default_sa.go:55] duration metric: took 189.551289ms for default service account to be created ...
	I1108 00:19:05.056263   51228 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 00:19:05.259774   51228 system_pods.go:86] 8 kube-system pods found
	I1108 00:19:05.259800   51228 system_pods.go:89] "coredns-5dd5756b68-tt9sm" [964a0552-9be0-4dbb-9a2f-0be3c93b8f83] Running
	I1108 00:19:05.259805   51228 system_pods.go:89] "etcd-default-k8s-diff-port-039263" [36863807-9899-4a8e-9a18-e3d938be8e8a] Running
	I1108 00:19:05.259810   51228 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-039263" [88677a44-54e3-41d7-8395-7616396a52d4] Running
	I1108 00:19:05.259814   51228 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-039263" [61a04987-85c4-462c-a4a7-1438c079b72b] Running
	I1108 00:19:05.259818   51228 system_pods.go:89] "kube-proxy-rhdhg" [405b26b9-e6b3-440d-8f28-60db650079a8] Running
	I1108 00:19:05.259822   51228 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-039263" [2a36824a-77da-4a54-94f4-484452f1b714] Running
	I1108 00:19:05.259828   51228 system_pods.go:89] "metrics-server-57f55c9bc5-j6t7g" [5c0e827c-8281-4b51-b0c7-d43d0aa22e29] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:19:05.259832   51228 system_pods.go:89] "storage-provisioner" [4cace2ff-d7cd-4d31-9f11-d410bc675cbf] Running
	I1108 00:19:05.259840   51228 system_pods.go:126] duration metric: took 203.572791ms to wait for k8s-apps to be running ...
	I1108 00:19:05.259846   51228 system_svc.go:44] waiting for kubelet service to be running ...
	I1108 00:19:05.259889   51228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:19:05.274254   51228 system_svc.go:56] duration metric: took 14.400341ms (WaitForService) to wait for kubelet.
	I1108 00:19:05.274277   51228 kubeadm.go:581] duration metric: took 6.278053459s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1108 00:19:05.274304   51228 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:19:05.457057   51228 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:19:05.457086   51228 node_conditions.go:123] node cpu capacity is 2
	I1108 00:19:05.457097   51228 node_conditions.go:105] duration metric: took 182.787127ms to run NodePressure ...
	I1108 00:19:05.457107   51228 start.go:228] waiting for startup goroutines ...
	I1108 00:19:05.457113   51228 start.go:233] waiting for cluster config update ...
	I1108 00:19:05.457122   51228 start.go:242] writing updated cluster config ...
	I1108 00:19:05.457358   51228 ssh_runner.go:195] Run: rm -f paused
	I1108 00:19:05.507414   51228 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1108 00:19:05.509695   51228 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-039263" cluster and "default" namespace by default
	I1108 00:19:04.715259   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:07.214815   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:09.214886   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:11.715679   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:14.215690   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:16.716315   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:19.215323   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:21.715872   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:24.215543   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:26.409609   50022 pod_ready.go:81] duration metric: took 4m0.000552573s waiting for pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace to be "Ready" ...
	E1108 00:19:26.409644   50022 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1108 00:19:26.409659   50022 pod_ready.go:38] duration metric: took 4m1.201158343s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:19:26.409684   50022 kubeadm.go:640] restartCluster took 5m11.212754497s
	W1108 00:19:26.409757   50022 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1108 00:19:26.409790   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1108 00:19:31.401367   50022 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.991549602s)
	I1108 00:19:31.401473   50022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:19:31.415823   50022 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:19:31.425384   50022 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:19:31.435585   50022 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
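The exit-status-2 failure above is expected right after `kubeadm reset`: none of the four kubeconfig files exist anymore, so minikube skips stale-config cleanup and proceeds straight to `kubeadm init`. The same existence check, as a minimal Go sketch:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            if _, err := os.Stat(f); err != nil {
                fmt.Println("missing:", f) // all four are missing after kubeadm reset
            }
        }
    }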
	I1108 00:19:31.435635   50022 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1108 00:19:31.492015   50022 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1108 00:19:31.492120   50022 kubeadm.go:322] [preflight] Running pre-flight checks
	I1108 00:19:31.649293   50022 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 00:19:31.649437   50022 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 00:19:31.649605   50022 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 00:19:31.886799   50022 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 00:19:31.886955   50022 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 00:19:31.896062   50022 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1108 00:19:32.038269   50022 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 00:19:32.040677   50022 out.go:204]   - Generating certificates and keys ...
	I1108 00:19:32.040833   50022 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1108 00:19:32.040945   50022 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1108 00:19:32.041037   50022 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1108 00:19:32.041085   50022 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1108 00:19:32.041142   50022 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1108 00:19:32.041231   50022 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1108 00:19:32.041346   50022 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1108 00:19:32.041441   50022 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1108 00:19:32.041594   50022 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1108 00:19:32.042173   50022 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1108 00:19:32.042236   50022 kubeadm.go:322] [certs] Using the existing "sa" key
	I1108 00:19:32.042302   50022 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 00:19:32.325005   50022 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 00:19:32.544755   50022 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 00:19:32.726539   50022 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 00:19:32.905403   50022 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 00:19:32.906525   50022 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 00:19:32.908371   50022 out.go:204]   - Booting up control plane ...
	I1108 00:19:32.908514   50022 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 00:19:32.919163   50022 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 00:19:32.919256   50022 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 00:19:32.919387   50022 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 00:19:32.928261   50022 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1108 00:19:42.937037   50022 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.006146 seconds
	I1108 00:19:42.937215   50022 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 00:19:42.955795   50022 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 00:19:43.479726   50022 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 00:19:43.479868   50022 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-590541 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1108 00:19:43.989897   50022 kubeadm.go:322] [bootstrap-token] Using token: rpiq38.6eoemv6ygv6ghnel
	I1108 00:19:43.991262   50022 out.go:204]   - Configuring RBAC rules ...
	I1108 00:19:43.991391   50022 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 00:19:44.001502   50022 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 00:19:44.006931   50022 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 00:19:44.012505   50022 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 00:19:44.021422   50022 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 00:19:44.111517   50022 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1108 00:19:44.412934   50022 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1108 00:19:44.412985   50022 kubeadm.go:322] 
	I1108 00:19:44.413073   50022 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1108 00:19:44.413088   50022 kubeadm.go:322] 
	I1108 00:19:44.413186   50022 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1108 00:19:44.413196   50022 kubeadm.go:322] 
	I1108 00:19:44.413230   50022 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1108 00:19:44.413317   50022 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 00:19:44.413388   50022 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 00:19:44.413398   50022 kubeadm.go:322] 
	I1108 00:19:44.413489   50022 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1108 00:19:44.413608   50022 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 00:19:44.413704   50022 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 00:19:44.413720   50022 kubeadm.go:322] 
	I1108 00:19:44.413851   50022 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1108 00:19:44.413974   50022 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1108 00:19:44.413988   50022 kubeadm.go:322] 
	I1108 00:19:44.414090   50022 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token rpiq38.6eoemv6ygv6ghnel \
	I1108 00:19:44.414288   50022 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 \
	I1108 00:19:44.414337   50022 kubeadm.go:322]     --control-plane 	  
	I1108 00:19:44.414347   50022 kubeadm.go:322] 
	I1108 00:19:44.414458   50022 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1108 00:19:44.414474   50022 kubeadm.go:322] 
	I1108 00:19:44.414593   50022 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token rpiq38.6eoemv6ygv6ghnel \
	I1108 00:19:44.414754   50022 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 
	I1108 00:19:44.416038   50022 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
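The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, which joining nodes use to pin the CA. A minimal Go sketch of recomputing it from the CA certificate (path is the kubeadm default):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            fmt.Println(err)
            return
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            fmt.Println("no PEM block found")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println(err)
            return
        }
        // Hash the DER-encoded SubjectPublicKeyInfo, matching kubeadm's "sha256:..." format.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }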
	I1108 00:19:44.416063   50022 cni.go:84] Creating CNI manager for ""
	I1108 00:19:44.416073   50022 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:19:44.417877   50022 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:19:44.419195   50022 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:19:44.448380   50022 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
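The 457-byte conflist written above configures the bridge CNI plugin that the "kvm2 driver + crio runtime" combination selects. The exact file from this run is not shown in the log; a representative bridge conflist of that shape, written the same way, might look like this (contents illustrative, not minikube's actual file):

    package main

    import "os"

    func main() {
        conf := `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {"type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
         "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`
        // Illustrative values; minikube's generated conflist may differ.
        _ = os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conf), 0o644)
    }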
	I1108 00:19:44.474228   50022 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 00:19:44.474339   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:44.474380   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=old-k8s-version-590541 minikube.k8s.io/updated_at=2023_11_08T00_19_44_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:44.739449   50022 ops.go:34] apiserver oom_adj: -16
	I1108 00:19:44.739605   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:44.848712   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:45.444347   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:45.944721   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:46.444140   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:46.944185   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:47.444342   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:47.944227   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:48.443941   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:48.944002   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:49.444440   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:49.943801   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:50.444481   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:50.944720   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:51.443857   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:51.943755   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:52.444663   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:52.944052   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:53.443917   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:53.943763   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:54.443886   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:54.944615   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:55.444156   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:55.944693   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:56.443823   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:56.944727   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:57.444188   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:57.943966   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:58.444659   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:58.944651   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:59.061808   50022 kubeadm.go:1081] duration metric: took 14.587519972s to wait for elevateKubeSystemPrivileges.
	I1108 00:19:59.061855   50022 kubeadm.go:406] StartCluster complete in 5m43.925088245s
	I1108 00:19:59.061878   50022 settings.go:142] acquiring lock: {Name:mk24113e0811d0822c92609e9886aa6fa175d90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:19:59.061962   50022 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:19:59.063740   50022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:19:59.064004   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 00:19:59.064107   50022 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1108 00:19:59.064182   50022 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-590541"
	I1108 00:19:59.064198   50022 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-590541"
	I1108 00:19:59.064213   50022 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-590541"
	W1108 00:19:59.064222   50022 addons.go:240] addon storage-provisioner should already be in state true
	I1108 00:19:59.064224   50022 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-590541"
	I1108 00:19:59.064233   50022 config.go:182] Loaded profile config "old-k8s-version-590541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1108 00:19:59.064236   50022 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-590541"
	I1108 00:19:59.064260   50022 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-590541"
	I1108 00:19:59.064265   50022 host.go:66] Checking if "old-k8s-version-590541" exists ...
	W1108 00:19:59.064274   50022 addons.go:240] addon metrics-server should already be in state true
	I1108 00:19:59.064406   50022 host.go:66] Checking if "old-k8s-version-590541" exists ...
	I1108 00:19:59.064720   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.064757   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.064761   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.064797   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.065271   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.065309   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.082041   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37295
	I1108 00:19:59.082534   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.083051   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.083075   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.083432   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.083970   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.084022   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.084099   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40485
	I1108 00:19:59.084222   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34213
	I1108 00:19:59.084440   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.084605   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.084870   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.084887   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.085151   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.085174   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.085248   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.085427   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetState
	I1108 00:19:59.085480   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.086399   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.086442   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.090677   50022 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-590541"
	W1108 00:19:59.090700   50022 addons.go:240] addon default-storageclass should already be in state true
	I1108 00:19:59.090728   50022 host.go:66] Checking if "old-k8s-version-590541" exists ...
	I1108 00:19:59.091092   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.091130   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.101788   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40869
	I1108 00:19:59.102208   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.102631   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.102648   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.103029   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.103219   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetState
	I1108 00:19:59.104809   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44911
	I1108 00:19:59.104937   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:19:59.106844   50022 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1108 00:19:59.105475   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.108350   50022 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 00:19:59.108374   50022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 00:19:59.108403   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:19:59.108551   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45009
	I1108 00:19:59.108910   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.108930   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.109878   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.109881   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.110039   50022 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-590541" context rescaled to 1 replicas
	I1108 00:19:59.110075   50022 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.49 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 00:19:59.111637   50022 out.go:177] * Verifying Kubernetes components...
	I1108 00:19:59.110208   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetState
	I1108 00:19:59.110398   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.113108   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.113220   50022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:19:59.113743   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.113792   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:19:59.114471   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.114510   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.115179   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:19:59.117011   50022 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:19:59.115897   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:19:59.116172   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:19:59.118325   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:19:59.118358   50022 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:19:59.118370   50022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 00:19:59.118383   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:19:59.118504   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:19:59.118696   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:19:59.118854   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:19:59.120889   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:19:59.121255   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:19:59.121280   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:19:59.121465   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:19:59.121647   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:19:59.121783   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:19:59.121868   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:19:59.135569   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40853
	I1108 00:19:59.135977   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.136428   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.136441   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.136799   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.137027   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetState
	I1108 00:19:59.138503   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:19:59.138735   50022 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 00:19:59.138745   50022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 00:19:59.138758   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:19:59.141494   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:19:59.141870   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:19:59.141895   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:19:59.142046   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:19:59.142248   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:19:59.142370   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:19:59.142592   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:19:59.281321   50022 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-590541" to be "Ready" ...
	I1108 00:19:59.281572   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 00:19:59.284783   50022 node_ready.go:49] node "old-k8s-version-590541" has status "Ready":"True"
	I1108 00:19:59.284804   50022 node_ready.go:38] duration metric: took 3.444344ms waiting for node "old-k8s-version-590541" to be "Ready" ...
	I1108 00:19:59.284830   50022 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:19:59.290322   50022 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:59.290908   50022 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 00:19:59.290925   50022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1108 00:19:59.311485   50022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:19:59.346809   50022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 00:19:59.350361   50022 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 00:19:59.350385   50022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 00:19:59.403305   50022 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:19:59.403328   50022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 00:19:59.479823   50022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:20:00.224554   50022 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1108 00:20:00.659427   50022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.347903115s)
	I1108 00:20:00.659441   50022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.312604515s)
	I1108 00:20:00.659501   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.659533   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.659536   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.659549   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.659834   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.659857   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.659867   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.659876   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.659933   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Closing plugin on server side
	I1108 00:20:00.659981   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.660022   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.660051   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.660062   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.660131   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Closing plugin on server side
	I1108 00:20:00.660242   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.660254   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.660300   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.660321   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.851614   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.851637   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.851930   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Closing plugin on server side
	I1108 00:20:00.851996   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.852027   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.992341   50022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.5124613s)
	I1108 00:20:00.992412   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.992429   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.992774   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Closing plugin on server side
	I1108 00:20:00.992811   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.992830   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.992841   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.992854   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.993100   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.993122   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.993162   50022 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-590541"
	I1108 00:20:00.995051   50022 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1108 00:20:00.996839   50022 addons.go:502] enable addons completed in 1.932740124s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1108 00:20:01.324759   50022 pod_ready.go:102] pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace has status "Ready":"False"
	I1108 00:20:03.823744   50022 pod_ready.go:102] pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace has status "Ready":"False"
	I1108 00:20:06.322994   50022 pod_ready.go:102] pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace has status "Ready":"False"
	I1108 00:20:08.822755   50022 pod_ready.go:102] pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace has status "Ready":"False"
	I1108 00:20:10.823247   50022 pod_ready.go:102] pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace has status "Ready":"False"
	I1108 00:20:12.819017   50022 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-979rq" not found
	I1108 00:20:12.819052   50022 pod_ready.go:81] duration metric: took 13.528699598s waiting for pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace to be "Ready" ...
	E1108 00:20:12.819067   50022 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-979rq" not found
	I1108 00:20:12.819075   50022 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-tbfp7" in "kube-system" namespace to be "Ready" ...
	I1108 00:20:12.825970   50022 pod_ready.go:92] pod "coredns-5644d7b6d9-tbfp7" in "kube-system" namespace has status "Ready":"True"
	I1108 00:20:12.825988   50022 pod_ready.go:81] duration metric: took 6.906077ms waiting for pod "coredns-5644d7b6d9-tbfp7" in "kube-system" namespace to be "Ready" ...
	I1108 00:20:12.825996   50022 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p27g4" in "kube-system" namespace to be "Ready" ...
	I1108 00:20:12.830826   50022 pod_ready.go:92] pod "kube-proxy-p27g4" in "kube-system" namespace has status "Ready":"True"
	I1108 00:20:12.830843   50022 pod_ready.go:81] duration metric: took 4.841517ms waiting for pod "kube-proxy-p27g4" in "kube-system" namespace to be "Ready" ...
	I1108 00:20:12.830852   50022 pod_ready.go:38] duration metric: took 13.54601076s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:20:12.830866   50022 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:20:12.830909   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:20:12.849600   50022 api_server.go:72] duration metric: took 13.739491815s to wait for apiserver process to appear ...
	I1108 00:20:12.849634   50022 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:20:12.849653   50022 api_server.go:253] Checking apiserver healthz at https://192.168.50.49:8443/healthz ...
	I1108 00:20:12.856740   50022 api_server.go:279] https://192.168.50.49:8443/healthz returned 200:
	ok
	I1108 00:20:12.857940   50022 api_server.go:141] control plane version: v1.16.0
	I1108 00:20:12.857960   50022 api_server.go:131] duration metric: took 8.319568ms to wait for apiserver health ...
	I1108 00:20:12.857967   50022 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:20:12.862192   50022 system_pods.go:59] 4 kube-system pods found
	I1108 00:20:12.862217   50022 system_pods.go:61] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:12.862222   50022 system_pods.go:61] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:12.862230   50022 system_pods.go:61] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:12.862239   50022 system_pods.go:61] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:12.862248   50022 system_pods.go:74] duration metric: took 4.275078ms to wait for pod list to return data ...
	I1108 00:20:12.862257   50022 default_sa.go:34] waiting for default service account to be created ...
	I1108 00:20:12.867018   50022 default_sa.go:45] found service account: "default"
	I1108 00:20:12.867043   50022 default_sa.go:55] duration metric: took 4.778337ms for default service account to be created ...
	I1108 00:20:12.867052   50022 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 00:20:12.871638   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:12.871664   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:12.871671   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:12.871682   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:12.871688   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:12.871706   50022 retry.go:31] will retry after 307.408821ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:13.184897   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:13.184927   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:13.184944   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:13.184954   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:13.184963   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:13.184984   50022 retry.go:31] will retry after 301.786347ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:13.492026   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:13.492053   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:13.492058   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:13.492065   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:13.492070   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:13.492085   50022 retry.go:31] will retry after 396.219719ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:13.893320   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:13.893348   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:13.893356   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:13.893366   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:13.893372   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:13.893390   50022 retry.go:31] will retry after 592.540002ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:14.490613   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:14.490638   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:14.490644   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:14.490651   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:14.490655   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:14.490670   50022 retry.go:31] will retry after 512.19038ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:15.008506   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:15.008533   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:15.008539   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:15.008545   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:15.008586   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:15.008606   50022 retry.go:31] will retry after 704.779032ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:15.719115   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:15.719140   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:15.719145   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:15.719152   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:15.719156   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:15.719174   50022 retry.go:31] will retry after 892.457504ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:16.616738   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:16.616764   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:16.616770   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:16.616776   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:16.616781   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:16.616795   50022 retry.go:31] will retry after 1.107800827s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:17.729962   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:17.729989   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:17.729997   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:17.730007   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:17.730014   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:17.730032   50022 retry.go:31] will retry after 1.24176205s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:18.976866   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:18.976891   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:18.976897   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:18.976905   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:18.976910   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:18.976925   50022 retry.go:31] will retry after 1.449825188s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:20.432723   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:20.432753   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:20.432760   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:20.432770   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:20.432776   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:20.432796   50022 retry.go:31] will retry after 1.764186569s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:22.202432   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:22.202465   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:22.202473   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:22.202484   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:22.202491   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:22.202522   50022 retry.go:31] will retry after 3.392893976s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:25.600685   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:25.600712   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:25.600717   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:25.600723   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:25.600728   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:25.600743   50022 retry.go:31] will retry after 3.537590817s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:29.143439   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:29.143464   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:29.143468   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:29.143475   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:29.143482   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:29.143502   50022 retry.go:31] will retry after 3.82527374s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:32.973763   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:32.973796   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:32.973804   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:32.973814   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:32.973821   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:32.973840   50022 retry.go:31] will retry after 6.225201923s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:39.204648   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:39.204682   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:39.204690   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:39.204702   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:39.204710   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:39.204729   50022 retry.go:31] will retry after 7.177772259s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:46.388992   50022 system_pods.go:86] 5 kube-system pods found
	I1108 00:20:46.389016   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:46.389022   50022 system_pods.go:89] "kube-apiserver-old-k8s-version-590541" [87b2cf34-c41c-47e0-9042-75cc9f45a3c5] Pending
	I1108 00:20:46.389025   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:46.389032   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:46.389037   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:46.389052   50022 retry.go:31] will retry after 8.995080935s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:55.391202   50022 system_pods.go:86] 7 kube-system pods found
	I1108 00:20:55.391228   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:55.391233   50022 system_pods.go:89] "etcd-old-k8s-version-590541" [0efed662-1891-4909-9452-76ec2984dbe2] Running
	I1108 00:20:55.391237   50022 system_pods.go:89] "kube-apiserver-old-k8s-version-590541" [87b2cf34-c41c-47e0-9042-75cc9f45a3c5] Running
	I1108 00:20:55.391241   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:55.391245   50022 system_pods.go:89] "kube-scheduler-old-k8s-version-590541" [a722f002-c4ab-467a-810a-20cf46a13211] Pending
	I1108 00:20:55.391252   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:55.391256   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:55.391272   50022 retry.go:31] will retry after 10.028239262s: missing components: kube-controller-manager, kube-scheduler
	I1108 00:21:05.426292   50022 system_pods.go:86] 8 kube-system pods found
	I1108 00:21:05.426317   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:21:05.426323   50022 system_pods.go:89] "etcd-old-k8s-version-590541" [0efed662-1891-4909-9452-76ec2984dbe2] Running
	I1108 00:21:05.426327   50022 system_pods.go:89] "kube-apiserver-old-k8s-version-590541" [87b2cf34-c41c-47e0-9042-75cc9f45a3c5] Running
	I1108 00:21:05.426331   50022 system_pods.go:89] "kube-controller-manager-old-k8s-version-590541" [90563d50-3d48-4256-ae70-82a2a6d1c251] Running
	I1108 00:21:05.426335   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:21:05.426339   50022 system_pods.go:89] "kube-scheduler-old-k8s-version-590541" [a722f002-c4ab-467a-810a-20cf46a13211] Running
	I1108 00:21:05.426345   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:21:05.426349   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:21:05.426356   50022 system_pods.go:126] duration metric: took 52.559298515s to wait for k8s-apps to be running ...
	I1108 00:21:05.426363   50022 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 00:21:05.426403   50022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:21:05.443281   50022 system_svc.go:56] duration metric: took 16.903571ms WaitForService to wait for kubelet.
	I1108 00:21:05.443315   50022 kubeadm.go:581] duration metric: took 1m6.333213694s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1108 00:21:05.443337   50022 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:21:05.447040   50022 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:21:05.447064   50022 node_conditions.go:123] node cpu capacity is 2
	I1108 00:21:05.447074   50022 node_conditions.go:105] duration metric: took 3.731788ms to run NodePressure ...
	I1108 00:21:05.447083   50022 start.go:228] waiting for startup goroutines ...
	I1108 00:21:05.447089   50022 start.go:233] waiting for cluster config update ...
	I1108 00:21:05.447098   50022 start.go:242] writing updated cluster config ...
	I1108 00:21:05.447409   50022 ssh_runner.go:195] Run: rm -f paused
	I1108 00:21:05.496203   50022 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1108 00:21:05.498233   50022 out.go:177] 
	W1108 00:21:05.499660   50022 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1108 00:21:05.500985   50022 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1108 00:21:05.502464   50022 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-590541" cluster and "default" namespace by default
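	The skew warning above is expected: kubectl 1.28.3 is 12 minor versions ahead of the v1.16.0 cluster, far outside kubectl's supported +/-1 minor-version window. Per the hint in the log, the version-matched client bundled with minikube avoids the mismatch; a minimal example, assuming the profile name from this run:
	
	    minikube -p old-k8s-version-590541 kubectl -- version --client
	    minikube -p old-k8s-version-590541 kubectl -- get pods -A
	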
	
	* 
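	The CRI-O journal excerpt below records the kubelet's periodic CRI polling over gRPC (Version, ImageFsInfo, ListContainers). Assuming crictl is available on the node (e.g. after "minikube ssh"), the same endpoints can be queried by hand to cross-check these responses:
	
	    sudo crictl version      # RuntimeName/RuntimeVersion, cf. the Version responses
	    sudo crictl imagefsinfo  # image filesystem usage, cf. ImageFsInfo
	    sudo crictl ps -a        # the container list behind ListContainers
	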
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-11-08 00:13:55 UTC, ends at Wed 2023-11-08 00:30:07 UTC. --
	Nov 08 00:30:07 old-k8s-version-590541 crio[718]: time="2023-11-08 00:30:07.139304053Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699403407139289502,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=661f03f6-232b-4e94-be8d-db9ba627405b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:30:07 old-k8s-version-590541 crio[718]: time="2023-11-08 00:30:07.140240137Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=802c68eb-459b-4a46-9567-3d9147dd8c0c name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:30:07 old-k8s-version-590541 crio[718]: time="2023-11-08 00:30:07.140317027Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=802c68eb-459b-4a46-9567-3d9147dd8c0c name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:30:07 old-k8s-version-590541 crio[718]: time="2023-11-08 00:30:07.140496594Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cb87567dbf1a28ba3db5bc16945a47009d33ef3a348f951bd546c8806b60243d,PodSandboxId:7676c112a35a1d2ba86064ddc0f5c70700c18e8b67ed70907aa4dfa91d0ef49f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699402801679591587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23d9653-c31d-4713-be02-30b067b1b6aa,},Annotations:map[string]string{io.kubernetes.container.hash: 574f188d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff54e527b90d3760072290ef2cf557ae01212dae0ecb5c2f9bfa3c9dfafc99d,PodSandboxId:70d31f118fbbd2aae7131af19a402a79bab029614903824a58b01397f3a2f100,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1699402801263850837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p27g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2474fe2-c0f8-42a0-b276-56ff1113cac5,},Annotations:map[string]string{io.kubernetes.container.hash: 1f4230ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd93c4c016654cfa3004b5f68b3a6e7e5a9259b5dffcc48a12faeb01a28f9acb,PodSandboxId:c94a3ec1035c3cb4a0310b661854867b45d6bda9f1a0a50aba109be755c8ee85,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1699402800422911487,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-tbfp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af8ab5b9-9401-4755-86af-663236159220,},Annotations:map[string]string{io.kubernetes.container.hash: 300a4655,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d506696340f36500b2c12181eca3a195084bbaa9507b0c84c4df15ce9771189,PodSandboxId:c65c1d615c8a59dd0653646b7e1cfbaeee17f787f8067ea4f3e1bb3c53938c19,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1699402775795608444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdb0033a70b4c2a18dc2febf194bdbd,},Annotations:map[string]string{io.kubernetes.container.hash: cc017a0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c25719e59dbe1e0b49dc46a12c055e6114f3c50f8ec24d160bdb86d2b9cc54,PodSandboxId:8cae970912eac8b61cf49d014a86474988b437448577df4d3b45285d223bde9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1699402774005442507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58550bb028adaeff43b5b4b387c8c233db04bdb5c32d5d4cdce83e52fd4f4415,PodSandboxId:f2678d0a9b35be7d3215c2aaced7fc235864f613ba4cb57aa90c3a0cd60210ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1699402774031829374,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b67fab18718e30cc1c826158c31b300c007d62ebc0676b934f154f1442e6ffa,PodSandboxId:3d05ccd26595f87efc2ba3b8bda016418f12c0556864b66fc9932375f61a4dc9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1699402773895990735,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fb993be369e1a1142f88ada62a3c61,},Annotations:map[string]string{io.kubernetes.container.hash: 7ba6a2ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=802c68eb-459b-4a46-9567-3d9147dd8c0c name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:30:07 old-k8s-version-590541 crio[718]: time="2023-11-08 00:30:07.185965274Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f896f9e1-ec64-4e5f-b9b8-23064b2290cf name=/runtime.v1.RuntimeService/Version
	Nov 08 00:30:07 old-k8s-version-590541 crio[718]: time="2023-11-08 00:30:07.186045313Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f896f9e1-ec64-4e5f-b9b8-23064b2290cf name=/runtime.v1.RuntimeService/Version
	Nov 08 00:30:07 old-k8s-version-590541 crio[718]: time="2023-11-08 00:30:07.187241065Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b56a5bbe-3daa-49e6-82a1-75a583af6a43 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:30:07 old-k8s-version-590541 crio[718]: time="2023-11-08 00:30:07.187802590Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699403407187744628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=b56a5bbe-3daa-49e6-82a1-75a583af6a43 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:30:07 old-k8s-version-590541 crio[718]: time="2023-11-08 00:30:07.188402655Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b5d3cf83-2356-448f-a901-5f8ed4eb0ba3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:30:07 old-k8s-version-590541 crio[718]: time="2023-11-08 00:30:07.188466634Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b5d3cf83-2356-448f-a901-5f8ed4eb0ba3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:30:07 old-k8s-version-590541 crio[718]: time="2023-11-08 00:30:07.188788267Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cb87567dbf1a28ba3db5bc16945a47009d33ef3a348f951bd546c8806b60243d,PodSandboxId:7676c112a35a1d2ba86064ddc0f5c70700c18e8b67ed70907aa4dfa91d0ef49f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699402801679591587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23d9653-c31d-4713-be02-30b067b1b6aa,},Annotations:map[string]string{io.kubernetes.container.hash: 574f188d,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff54e527b90d3760072290ef2cf557ae01212dae0ecb5c2f9bfa3c9dfafc99d,PodSandboxId:70d31f118fbbd2aae7131af19a402a79bab029614903824a58b01397f3a2f100,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1699402801263850837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p27g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2474fe2-c0f8-42a0-b276-56ff1113cac5,},Annotations:map[string]string{io.kubernetes.container.hash: 1f4230ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd93c4c016654cfa3004b5f68b3a6e7e5a9259b5dffcc48a12faeb01a28f9acb,PodSandboxId:c94a3ec1035c3cb4a0310b661854867b45d6bda9f1a0a50aba109be755c8ee85,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1699402800422911487,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-tbfp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af8ab5b9-9401-4755-86af-663236159220,},Annotations:map[string]string{io.kubernetes.container.hash: 300a4655,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d506696340f36500b2c12181eca3a195084bbaa9507b0c84c4df15ce9771189,PodSandboxId:c65c1d615c8a59dd0653646b7e1cfbaeee17f787f8067ea4f3e1bb3c53938c19,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1699402775795608444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdb0033a70b4c2a18dc2febf194bdbd,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cc017a0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c25719e59dbe1e0b49dc46a12c055e6114f3c50f8ec24d160bdb86d2b9cc54,PodSandboxId:8cae970912eac8b61cf49d014a86474988b437448577df4d3b45285d223bde9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1699402774005442507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58550bb028adaeff43b5b4b387c8c233db04bdb5c32d5d4cdce83e52fd4f4415,PodSandboxId:f2678d0a9b35be7d3215c2aaced7fc235864f613ba4cb57aa90c3a0cd60210ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1699402774031829374,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b67fab18718e30cc1c826158c31b300c007d62ebc0676b934f154f1442e6ffa,PodSandboxId:3d05ccd26595f87efc2ba3b8bda016418f12c0556864b66fc9932375f61a4dc9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1699402773895990735,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fb993be369e1a1142f88ada62a3c61,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7ba6a2ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b5d3cf83-2356-448f-a901-5f8ed4eb0ba3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:30:07 old-k8s-version-590541 crio[718]: time="2023-11-08 00:30:07.230288369Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b1659f97-8036-4307-8496-0ac2a6c39794 name=/runtime.v1.RuntimeService/Version
	Nov 08 00:30:07 old-k8s-version-590541 crio[718]: time="2023-11-08 00:30:07.230376784Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b1659f97-8036-4307-8496-0ac2a6c39794 name=/runtime.v1.RuntimeService/Version
	Nov 08 00:30:07 old-k8s-version-590541 crio[718]: time="2023-11-08 00:30:07.231501232Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7a57c0a7-53b7-4426-95a9-4ed98ecc219a name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:30:07 old-k8s-version-590541 crio[718]: time="2023-11-08 00:30:07.232104388Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699403407232085808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=7a57c0a7-53b7-4426-95a9-4ed98ecc219a name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:30:07 old-k8s-version-590541 crio[718]: time="2023-11-08 00:30:07.234413112Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=21bc80bb-2469-400e-ba04-b76758ef1255 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:30:07 old-k8s-version-590541 crio[718]: time="2023-11-08 00:30:07.234479765Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=21bc80bb-2469-400e-ba04-b76758ef1255 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:30:07 old-k8s-version-590541 crio[718]: time="2023-11-08 00:30:07.234767416Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cb87567dbf1a28ba3db5bc16945a47009d33ef3a348f951bd546c8806b60243d,PodSandboxId:7676c112a35a1d2ba86064ddc0f5c70700c18e8b67ed70907aa4dfa91d0ef49f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699402801679591587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23d9653-c31d-4713-be02-30b067b1b6aa,},Annotations:map[string]string{io.kubernetes.container.hash: 574f188d,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff54e527b90d3760072290ef2cf557ae01212dae0ecb5c2f9bfa3c9dfafc99d,PodSandboxId:70d31f118fbbd2aae7131af19a402a79bab029614903824a58b01397f3a2f100,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1699402801263850837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p27g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2474fe2-c0f8-42a0-b276-56ff1113cac5,},Annotations:map[string]string{io.kubernetes.container.hash: 1f4230ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd93c4c016654cfa3004b5f68b3a6e7e5a9259b5dffcc48a12faeb01a28f9acb,PodSandboxId:c94a3ec1035c3cb4a0310b661854867b45d6bda9f1a0a50aba109be755c8ee85,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1699402800422911487,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-tbfp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af8ab5b9-9401-4755-86af-663236159220,},Annotations:map[string]string{io.kubernetes.container.hash: 300a4655,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d506696340f36500b2c12181eca3a195084bbaa9507b0c84c4df15ce9771189,PodSandboxId:c65c1d615c8a59dd0653646b7e1cfbaeee17f787f8067ea4f3e1bb3c53938c19,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1699402775795608444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdb0033a70b4c2a18dc2febf194bdbd,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cc017a0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c25719e59dbe1e0b49dc46a12c055e6114f3c50f8ec24d160bdb86d2b9cc54,PodSandboxId:8cae970912eac8b61cf49d014a86474988b437448577df4d3b45285d223bde9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1699402774005442507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58550bb028adaeff43b5b4b387c8c233db04bdb5c32d5d4cdce83e52fd4f4415,PodSandboxId:f2678d0a9b35be7d3215c2aaced7fc235864f613ba4cb57aa90c3a0cd60210ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1699402774031829374,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b67fab18718e30cc1c826158c31b300c007d62ebc0676b934f154f1442e6ffa,PodSandboxId:3d05ccd26595f87efc2ba3b8bda016418f12c0556864b66fc9932375f61a4dc9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1699402773895990735,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fb993be369e1a1142f88ada62a3c61,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7ba6a2ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=21bc80bb-2469-400e-ba04-b76758ef1255 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:30:07 old-k8s-version-590541 crio[718]: time="2023-11-08 00:30:07.271718266Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=56ffda6f-d75b-4667-b443-4787f90302e4 name=/runtime.v1.RuntimeService/Version
	Nov 08 00:30:07 old-k8s-version-590541 crio[718]: time="2023-11-08 00:30:07.271850186Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=56ffda6f-d75b-4667-b443-4787f90302e4 name=/runtime.v1.RuntimeService/Version
	Nov 08 00:30:07 old-k8s-version-590541 crio[718]: time="2023-11-08 00:30:07.273709041Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=97721c6c-719d-4772-8096-6d46cc517cf9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:30:07 old-k8s-version-590541 crio[718]: time="2023-11-08 00:30:07.274102471Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699403407274086808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=97721c6c-719d-4772-8096-6d46cc517cf9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:30:07 old-k8s-version-590541 crio[718]: time="2023-11-08 00:30:07.274637282Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e191a6ac-d21e-4c06-9532-0df74138bb28 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:30:07 old-k8s-version-590541 crio[718]: time="2023-11-08 00:30:07.274701684Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e191a6ac-d21e-4c06-9532-0df74138bb28 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:30:07 old-k8s-version-590541 crio[718]: time="2023-11-08 00:30:07.274937323Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cb87567dbf1a28ba3db5bc16945a47009d33ef3a348f951bd546c8806b60243d,PodSandboxId:7676c112a35a1d2ba86064ddc0f5c70700c18e8b67ed70907aa4dfa91d0ef49f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699402801679591587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23d9653-c31d-4713-be02-30b067b1b6aa,},Annotations:map[string]string{io.kubernetes.container.hash: 574f188d,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff54e527b90d3760072290ef2cf557ae01212dae0ecb5c2f9bfa3c9dfafc99d,PodSandboxId:70d31f118fbbd2aae7131af19a402a79bab029614903824a58b01397f3a2f100,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1699402801263850837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p27g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2474fe2-c0f8-42a0-b276-56ff1113cac5,},Annotations:map[string]string{io.kubernetes.container.hash: 1f4230ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd93c4c016654cfa3004b5f68b3a6e7e5a9259b5dffcc48a12faeb01a28f9acb,PodSandboxId:c94a3ec1035c3cb4a0310b661854867b45d6bda9f1a0a50aba109be755c8ee85,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1699402800422911487,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-tbfp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af8ab5b9-9401-4755-86af-663236159220,},Annotations:map[string]string{io.kubernetes.container.hash: 300a4655,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d506696340f36500b2c12181eca3a195084bbaa9507b0c84c4df15ce9771189,PodSandboxId:c65c1d615c8a59dd0653646b7e1cfbaeee17f787f8067ea4f3e1bb3c53938c19,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1699402775795608444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdb0033a70b4c2a18dc2febf194bdbd,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cc017a0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c25719e59dbe1e0b49dc46a12c055e6114f3c50f8ec24d160bdb86d2b9cc54,PodSandboxId:8cae970912eac8b61cf49d014a86474988b437448577df4d3b45285d223bde9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1699402774005442507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58550bb028adaeff43b5b4b387c8c233db04bdb5c32d5d4cdce83e52fd4f4415,PodSandboxId:f2678d0a9b35be7d3215c2aaced7fc235864f613ba4cb57aa90c3a0cd60210ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1699402774031829374,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b67fab18718e30cc1c826158c31b300c007d62ebc0676b934f154f1442e6ffa,PodSandboxId:3d05ccd26595f87efc2ba3b8bda016418f12c0556864b66fc9932375f61a4dc9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1699402773895990735,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fb993be369e1a1142f88ada62a3c61,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7ba6a2ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e191a6ac-d21e-4c06-9532-0df74138bb28 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cb87567dbf1a2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 minutes ago      Running             storage-provisioner       0                   7676c112a35a1       storage-provisioner
	4ff54e527b90d       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   10 minutes ago      Running             kube-proxy                0                   70d31f118fbbd       kube-proxy-p27g4
	dd93c4c016654       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   10 minutes ago      Running             coredns                   0                   c94a3ec1035c3       coredns-5644d7b6d9-tbfp7
	7d506696340f3       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   10 minutes ago      Running             etcd                      0                   c65c1d615c8a5       etcd-old-k8s-version-590541
	58550bb028ada       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   10 minutes ago      Running             kube-controller-manager   0                   f2678d0a9b35b       kube-controller-manager-old-k8s-version-590541
	59c25719e59db       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   10 minutes ago      Running             kube-scheduler            0                   8cae970912eac       kube-scheduler-old-k8s-version-590541
	6b67fab18718e       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   10 minutes ago      Running             kube-apiserver            0                   3d05ccd26595f       kube-apiserver-old-k8s-version-590541
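
For reference, the table above is CRI-O's own view of the node. A minimal sketch of reproducing it by hand, assuming crictl is installed on the minikube guest and using the CRI socket path reported in the node annotations (/var/run/crio/crio.sock):

    # open a shell on the guest for this profile
    minikube -p old-k8s-version-590541 ssh
    # list all containers, including exited ones
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
    # dump full metadata for one container by ID prefix
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock inspect cb87567dbf1a2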
	
	* 
	* ==> coredns [dd93c4c016654cfa3004b5f68b3a6e7e5a9259b5dffcc48a12faeb01a28f9acb] <==
	* .:53
	2023-11-08T00:20:00.840Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-11-08T00:20:00.840Z [INFO] CoreDNS-1.6.2
	2023-11-08T00:20:00.840Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-11-08T00:20:35.733Z [INFO] plugin/reload: Running configuration MD5 = 06ff7f9bb57317d7ab02f5fb9baaa00d
	[INFO] Reloading complete
	2023-11-08T00:20:35.751Z [INFO] 127.0.0.1:58167 - 21342 "HINFO IN 2034047240627481077.1396256950986485262. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017476798s
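
The reload entries above show CoreDNS picking up a rewritten Corefile about 35 seconds after start (the configuration MD5 changes). A sketch for inspecting the active configuration and logs, assuming the profile's kubeconfig context and the standard kubeadm k8s-app=kube-dns label:

    # show the Corefile CoreDNS is running with
    kubectl --context old-k8s-version-590541 -n kube-system get configmap coredns -o yaml
    # tail the CoreDNS pod logs
    kubectl --context old-k8s-version-590541 -n kube-system logs -l k8s-app=kube-dns --tail=50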
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-590541
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-590541
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e
	                    minikube.k8s.io/name=old-k8s-version-590541
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_08T00_19_44_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Nov 2023 00:19:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Nov 2023 00:29:39 +0000   Wed, 08 Nov 2023 00:19:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Nov 2023 00:29:39 +0000   Wed, 08 Nov 2023 00:19:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Nov 2023 00:29:39 +0000   Wed, 08 Nov 2023 00:19:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Nov 2023 00:29:39 +0000   Wed, 08 Nov 2023 00:19:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.49
	  Hostname:    old-k8s-version-590541
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 ea38dbe27e1d423cb00439f981f4114c
	 System UUID:                ea38dbe2-7e1d-423c-b004-39f981f4114c
	 Boot ID:                    c6279805-6470-40f6-8b2b-2a2830f283de
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-tbfp7                           100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                etcd-old-k8s-version-590541                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                kube-apiserver-old-k8s-version-590541              250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m23s
	  kube-system                kube-controller-manager-old-k8s-version-590541     200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                kube-proxy-p27g4                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                kube-scheduler-old-k8s-version-590541              100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                metrics-server-74d5856cc6-b4rtb                    100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         10m
	  kube-system                storage-provisioner                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet, old-k8s-version-590541     Node old-k8s-version-590541 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet, old-k8s-version-590541     Node old-k8s-version-590541 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet, old-k8s-version-590541     Node old-k8s-version-590541 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                kube-proxy, old-k8s-version-590541  Starting kube-proxy.
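
The node dump above matches what kubectl prints for this node; to collect it directly, assuming the old-k8s-version-590541 context is present in the kubeconfig:

    # full node description, including conditions, capacity, and events
    kubectl --context old-k8s-version-590541 describe node old-k8s-version-590541
    # one-line summary with IP, kubelet version, and container runtime
    kubectl --context old-k8s-version-590541 get node old-k8s-version-590541 -o wide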
	
	* 
	* ==> dmesg <==
	* [Nov 8 00:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.075823] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.825914] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.635735] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.147958] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.785770] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Nov 8 00:14] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.129901] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.153234] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.117809] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.231016] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[ +20.122352] systemd-fstab-generator[1030]: Ignoring "noauto" for root device
	[  +0.443718] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +24.765152] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.535089] kauditd_printk_skb: 2 callbacks suppressed
	[Nov 8 00:19] systemd-fstab-generator[3195]: Ignoring "noauto" for root device
	[  +0.766169] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 8 00:20] kauditd_printk_skb: 4 callbacks suppressed
	
	* 
	* ==> etcd [7d506696340f36500b2c12181eca3a195084bbaa9507b0c84c4df15ce9771189] <==
	* 2023-11-08 00:19:35.931804 I | raft: 2916afbfe5f17297 became follower at term 0
	2023-11-08 00:19:35.931813 I | raft: newRaft 2916afbfe5f17297 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-11-08 00:19:35.931816 I | raft: 2916afbfe5f17297 became follower at term 1
	2023-11-08 00:19:35.946270 W | auth: simple token is not cryptographically signed
	2023-11-08 00:19:35.952513 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-11-08 00:19:35.953068 I | etcdserver: 2916afbfe5f17297 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-11-08 00:19:35.953910 I | etcdserver/membership: added member 2916afbfe5f17297 [https://192.168.50.49:2380] to cluster 44542e4adf58543b
	2023-11-08 00:19:35.959508 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-08 00:19:35.960079 I | embed: listening for metrics on http://192.168.50.49:2381
	2023-11-08 00:19:35.960259 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-11-08 00:19:36.032380 I | raft: 2916afbfe5f17297 is starting a new election at term 1
	2023-11-08 00:19:36.032462 I | raft: 2916afbfe5f17297 became candidate at term 2
	2023-11-08 00:19:36.032488 I | raft: 2916afbfe5f17297 received MsgVoteResp from 2916afbfe5f17297 at term 2
	2023-11-08 00:19:36.032508 I | raft: 2916afbfe5f17297 became leader at term 2
	2023-11-08 00:19:36.032612 I | raft: raft.node: 2916afbfe5f17297 elected leader 2916afbfe5f17297 at term 2
	2023-11-08 00:19:36.033104 I | etcdserver: setting up the initial cluster version to 3.3
	2023-11-08 00:19:36.034000 I | etcdserver: published {Name:old-k8s-version-590541 ClientURLs:[https://192.168.50.49:2379]} to cluster 44542e4adf58543b
	2023-11-08 00:19:36.034177 I | embed: ready to serve client requests
	2023-11-08 00:19:36.037399 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-08 00:19:36.037782 I | embed: ready to serve client requests
	2023-11-08 00:19:36.040507 I | embed: serving client requests on 192.168.50.49:2379
	2023-11-08 00:19:36.054612 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-11-08 00:19:36.054741 I | etcdserver/api: enabled capabilities for version 3.3
	2023-11-08 00:29:36.074452 I | mvcc: store.index: compact 668
	2023-11-08 00:29:36.077042 I | mvcc: finished scheduled compaction at 668 (took 2.129497ms)
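
The member above elects itself leader and reports a routine compaction ten minutes in. A sketch for querying it directly from the guest, assuming an etcdctl binary is available; the CA path comes from the trusted-ca value in the embed line above, and reusing server.crt/server.key as the client pair is an assumption (client-cert-auth is true, so a dedicated client certificate may be required instead):

    sudo ETCDCTL_API=3 etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key \
      endpoint status --write-out=table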
	
	* 
	* ==> kernel <==
	*  00:30:07 up 16 min,  0 users,  load average: 0.01, 0.08, 0.12
	Linux old-k8s-version-590541 5.10.57 #1 SMP Tue Nov 7 06:51:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [6b67fab18718e30cc1c826158c31b300c007d62ebc0676b934f154f1442e6ffa] <==
	* I1108 00:23:02.437322       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1108 00:23:02.437449       1 handler_proxy.go:99] no RequestInfo found in the context
	E1108 00:23:02.437505       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1108 00:23:02.437517       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1108 00:24:40.332772       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1108 00:24:40.333107       1 handler_proxy.go:99] no RequestInfo found in the context
	E1108 00:24:40.333291       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1108 00:24:40.333334       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1108 00:25:40.333830       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1108 00:25:40.334046       1 handler_proxy.go:99] no RequestInfo found in the context
	E1108 00:25:40.334093       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1108 00:25:40.334114       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1108 00:27:40.334647       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1108 00:27:40.334759       1 handler_proxy.go:99] no RequestInfo found in the context
	E1108 00:27:40.334838       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1108 00:27:40.334849       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1108 00:29:40.335920       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1108 00:29:40.336225       1 handler_proxy.go:99] no RequestInfo found in the context
	E1108 00:29:40.336372       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1108 00:29:40.336399       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
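
Every few minutes the aggregator retries v1beta1.metrics.k8s.io and gets a 503, meaning the service backing the metrics API never became reachable. A quick way to confirm, assuming the profile's context:

    # the Available condition should show False with the failure reason
    kubectl --context old-k8s-version-590541 get apiservice v1beta1.metrics.k8s.io
    kubectl --context old-k8s-version-590541 get apiservice v1beta1.metrics.k8s.io \
      -o jsonpath='{.status.conditions[?(@.type=="Available")].message}'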
	
	* 
	* ==> kube-controller-manager [58550bb028adaeff43b5b4b387c8c233db04bdb5c32d5d4cdce83e52fd4f4415] <==
	* E1108 00:24:01.265111       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1108 00:24:15.279740       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1108 00:24:31.517988       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1108 00:24:47.282183       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1108 00:25:01.769875       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1108 00:25:19.284646       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1108 00:25:32.021942       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1108 00:25:51.286671       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1108 00:26:02.274328       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1108 00:26:23.289006       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1108 00:26:32.526063       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1108 00:26:55.290949       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1108 00:27:02.777881       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1108 00:27:27.293214       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1108 00:27:33.030011       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1108 00:27:59.295188       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1108 00:28:03.282283       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1108 00:28:31.297097       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1108 00:28:33.534511       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1108 00:29:03.299096       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1108 00:29:03.787013       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1108 00:29:34.039285       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1108 00:29:35.301315       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1108 00:30:04.291481       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1108 00:30:07.303202       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [4ff54e527b90d3760072290ef2cf557ae01212dae0ecb5c2f9bfa3c9dfafc99d] <==
	* W1108 00:20:01.616170       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1108 00:20:01.630337       1 node.go:135] Successfully retrieved node IP: 192.168.50.49
	I1108 00:20:01.630455       1 server_others.go:149] Using iptables Proxier.
	I1108 00:20:01.631062       1 server.go:529] Version: v1.16.0
	I1108 00:20:01.634102       1 config.go:313] Starting service config controller
	I1108 00:20:01.634257       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1108 00:20:01.635832       1 config.go:131] Starting endpoints config controller
	I1108 00:20:01.635877       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1108 00:20:01.738794       1 shared_informer.go:204] Caches are synced for service config 
	I1108 00:20:01.738942       1 shared_informer.go:204] Caches are synced for endpoints config 
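
With proxy-mode unset, kube-proxy fell back to the iptables proxier, so services are implemented as NAT rules on the guest. A sketch for confirming that, assuming iptables mode is in effect:

    # list the service NAT chain kube-proxy programs in iptables mode
    minikube -p old-k8s-version-590541 ssh "sudo iptables -t nat -L KUBE-SERVICES"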
	
	* 
	* ==> kube-scheduler [59c25719e59dbe1e0b49dc46a12c055e6114f3c50f8ec24d160bdb86d2b9cc54] <==
	* W1108 00:19:39.325394       1 authentication.go:79] Authentication is disabled
	I1108 00:19:39.325416       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1108 00:19:39.330968       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1108 00:19:39.382778       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1108 00:19:39.382954       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1108 00:19:39.383088       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1108 00:19:39.390874       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1108 00:19:39.391026       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1108 00:19:39.391096       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1108 00:19:39.391141       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1108 00:19:39.391184       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1108 00:19:39.391234       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1108 00:19:39.391277       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1108 00:19:39.391323       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1108 00:19:40.384594       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1108 00:19:40.385650       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1108 00:19:40.392800       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1108 00:19:40.395132       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1108 00:19:40.397128       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1108 00:19:40.400053       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1108 00:19:40.400415       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1108 00:19:40.404773       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1108 00:19:40.407133       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1108 00:19:40.411226       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1108 00:19:40.412378       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-11-08 00:13:55 UTC, ends at Wed 2023-11-08 00:30:07 UTC. --
	Nov 08 00:25:32 old-k8s-version-590541 kubelet[3201]: E1108 00:25:32.649657    3201 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 08 00:25:32 old-k8s-version-590541 kubelet[3201]: E1108 00:25:32.649776    3201 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 08 00:25:32 old-k8s-version-590541 kubelet[3201]: E1108 00:25:32.649814    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Nov 08 00:25:45 old-k8s-version-590541 kubelet[3201]: E1108 00:25:45.627132    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:25:58 old-k8s-version-590541 kubelet[3201]: E1108 00:25:58.626766    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:26:09 old-k8s-version-590541 kubelet[3201]: E1108 00:26:09.626940    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:26:24 old-k8s-version-590541 kubelet[3201]: E1108 00:26:24.626689    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:26:39 old-k8s-version-590541 kubelet[3201]: E1108 00:26:39.627369    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:26:54 old-k8s-version-590541 kubelet[3201]: E1108 00:26:54.629822    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:27:07 old-k8s-version-590541 kubelet[3201]: E1108 00:27:07.626973    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:27:18 old-k8s-version-590541 kubelet[3201]: E1108 00:27:18.627257    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:27:31 old-k8s-version-590541 kubelet[3201]: E1108 00:27:31.627239    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:27:46 old-k8s-version-590541 kubelet[3201]: E1108 00:27:46.626936    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:27:59 old-k8s-version-590541 kubelet[3201]: E1108 00:27:59.627079    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:28:11 old-k8s-version-590541 kubelet[3201]: E1108 00:28:11.626687    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:28:26 old-k8s-version-590541 kubelet[3201]: E1108 00:28:26.627005    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:28:38 old-k8s-version-590541 kubelet[3201]: E1108 00:28:38.627324    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:28:50 old-k8s-version-590541 kubelet[3201]: E1108 00:28:50.627812    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:29:02 old-k8s-version-590541 kubelet[3201]: E1108 00:29:02.626906    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:29:17 old-k8s-version-590541 kubelet[3201]: E1108 00:29:17.626963    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:29:28 old-k8s-version-590541 kubelet[3201]: E1108 00:29:28.626673    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:29:32 old-k8s-version-590541 kubelet[3201]: E1108 00:29:32.726366    3201 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Nov 08 00:29:40 old-k8s-version-590541 kubelet[3201]: E1108 00:29:40.626982    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:29:51 old-k8s-version-590541 kubelet[3201]: E1108 00:29:51.627025    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:30:05 old-k8s-version-590541 kubelet[3201]: E1108 00:30:05.627228    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [cb87567dbf1a28ba3db5bc16945a47009d33ef3a348f951bd546c8806b60243d] <==
	* I1108 00:20:01.833277       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 00:20:01.881419       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 00:20:01.881519       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1108 00:20:01.913318       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 00:20:01.915178       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-590541_b6996c9c-33bf-475b-98b2-3062155f53de!
	I1108 00:20:01.922393       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a277f195-c4dc-42dc-b3b4-4c761e9d10cf", APIVersion:"v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-590541_b6996c9c-33bf-475b-98b2-3062155f53de became leader
	I1108 00:20:02.016915       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-590541_b6996c9c-33bf-475b-98b2-3062155f53de!
	

-- /stdout --
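Note: the repeated ErrImagePull/ImagePullBackOff entries in the kubelet log above are expected for this suite; metrics-server is enabled with --registries=MetricsServer=fake.domain (see the addons enable entries in the Audit table below for other profiles), so the registry lookup can never resolve. A minimal manual check of the configured image, assuming the deployment is named metrics-server to match the pod name above:

	kubectl --context old-k8s-version-590541 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'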
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-590541 -n old-k8s-version-590541
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-590541 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-b4rtb
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-590541 describe pod metrics-server-74d5856cc6-b4rtb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-590541 describe pod metrics-server-74d5856cc6-b4rtb: exit status 1 (67.305447ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-b4rtb" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-590541 describe pod metrics-server-74d5856cc6-b4rtb: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.10s)
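Note: the NotFound from the describe above is a post-mortem race rather than an additional failure; the pod name captured by the field-selector query had already been replaced by the time describe ran. A hedged, race-free equivalent, assuming the addon's usual k8s-app=metrics-server label:

	kubectl --context old-k8s-version-590541 -n kube-system get pods -l k8s-app=metrics-server -o wide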

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (469.89s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-253253 -n embed-certs-253253
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-11-08 00:35:36.244048175 +0000 UTC m=+5670.407357051
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-253253 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-253253 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.265µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-253253 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
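Note: the image assertion at start_stop_delete_test.go:297 had no deployment info to print because the describe at :291 hit the same context deadline. A minimal manual check of the scraper image, assuming the deploy/dashboard-metrics-scraper name the test itself describes:

	kubectl --context embed-certs-253253 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'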
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-253253 -n embed-certs-253253
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-253253 logs -n 25
E1108 00:35:38.957059   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-253253 logs -n 25: (2.948859296s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC | 08 Nov 23 00:09 UTC |
	|         | default-k8s-diff-port-039263                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-590541             | old-k8s-version-590541       | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-590541                              | old-k8s-version-590541       | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC | 08 Nov 23 00:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-320390                  | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-253253                 | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-320390                                   | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC | 08 Nov 23 00:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-253253                                  | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC | 08 Nov 23 00:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-039263  | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC | 08 Nov 23 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC |                     |
	|         | default-k8s-diff-port-039263                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-039263       | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:12 UTC | 08 Nov 23 00:19 UTC |
	|         | default-k8s-diff-port-039263                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-590541                              | old-k8s-version-590541       | jenkins | v1.32.0 | 08 Nov 23 00:32 UTC | 08 Nov 23 00:32 UTC |
	| start   | -p newest-cni-409933 --memory=2200 --alsologtostderr   | newest-cni-409933            | jenkins | v1.32.0 | 08 Nov 23 00:32 UTC | 08 Nov 23 00:33 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-409933             | newest-cni-409933            | jenkins | v1.32.0 | 08 Nov 23 00:33 UTC | 08 Nov 23 00:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-409933                                   | newest-cni-409933            | jenkins | v1.32.0 | 08 Nov 23 00:33 UTC | 08 Nov 23 00:33 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-409933                  | newest-cni-409933            | jenkins | v1.32.0 | 08 Nov 23 00:33 UTC | 08 Nov 23 00:33 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-409933 --memory=2200 --alsologtostderr   | newest-cni-409933            | jenkins | v1.32.0 | 08 Nov 23 00:33 UTC | 08 Nov 23 00:34 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-409933 sudo                              | newest-cni-409933            | jenkins | v1.32.0 | 08 Nov 23 00:34 UTC | 08 Nov 23 00:34 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-409933                                   | newest-cni-409933            | jenkins | v1.32.0 | 08 Nov 23 00:34 UTC | 08 Nov 23 00:34 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-409933                                   | newest-cni-409933            | jenkins | v1.32.0 | 08 Nov 23 00:34 UTC | 08 Nov 23 00:34 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-320390                                   | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:34 UTC | 08 Nov 23 00:34 UTC |
	| delete  | -p newest-cni-409933                                   | newest-cni-409933            | jenkins | v1.32.0 | 08 Nov 23 00:34 UTC | 08 Nov 23 00:34 UTC |
	| start   | -p auto-010870 --memory=3072                           | auto-010870                  | jenkins | v1.32.0 | 08 Nov 23 00:34 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p newest-cni-409933                                   | newest-cni-409933            | jenkins | v1.32.0 | 08 Nov 23 00:34 UTC | 08 Nov 23 00:34 UTC |
	| start   | -p kindnet-010870                                      | kindnet-010870               | jenkins | v1.32.0 | 08 Nov 23 00:34 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/08 00:34:33
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 00:34:33.177149   57577 out.go:296] Setting OutFile to fd 1 ...
	I1108 00:34:33.177376   57577 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:34:33.177385   57577 out.go:309] Setting ErrFile to fd 2...
	I1108 00:34:33.177389   57577 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:34:33.177570   57577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
	I1108 00:34:33.178162   57577 out.go:303] Setting JSON to false
	I1108 00:34:33.179081   57577 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8222,"bootTime":1699395451,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 00:34:33.179144   57577 start.go:138] virtualization: kvm guest
	I1108 00:34:33.181430   57577 out.go:177] * [kindnet-010870] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1108 00:34:33.183070   57577 out.go:177]   - MINIKUBE_LOCATION=17585
	I1108 00:34:33.183035   57577 notify.go:220] Checking for updates...
	I1108 00:34:33.184618   57577 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 00:34:33.186188   57577 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:34:33.187758   57577 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	I1108 00:34:33.189526   57577 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 00:34:33.190911   57577 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 00:34:33.193046   57577 config.go:182] Loaded profile config "auto-010870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:34:33.193210   57577 config.go:182] Loaded profile config "default-k8s-diff-port-039263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:34:33.193342   57577 config.go:182] Loaded profile config "embed-certs-253253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:34:33.193455   57577 driver.go:378] Setting default libvirt URI to qemu:///system
	I1108 00:34:34.174231   57577 out.go:177] * Using the kvm2 driver based on user configuration
	I1108 00:34:34.175824   57577 start.go:298] selected driver: kvm2
	I1108 00:34:34.175838   57577 start.go:902] validating driver "kvm2" against <nil>
	I1108 00:34:34.175849   57577 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 00:34:34.176600   57577 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:34:34.176699   57577 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17585-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1108 00:34:34.191836   57577 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1108 00:34:34.191880   57577 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1108 00:34:34.192129   57577 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 00:34:34.192201   57577 cni.go:84] Creating CNI manager for "kindnet"
	I1108 00:34:34.192226   57577 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 00:34:34.192242   57577 start_flags.go:323] config:
	{Name:kindnet-010870 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:kindnet-010870 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cn
i FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:34:34.192441   57577 iso.go:125] acquiring lock: {Name:mk02d02b2a7a45dbdd1b46a32fb0724673cb4d8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:34:34.194362   57577 out.go:177] * Starting control plane node kindnet-010870 in cluster kindnet-010870
	I1108 00:34:32.064668   57398 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1108 00:34:32.064909   57398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:34:32.064964   57398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:34:32.079449   57398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40219
	I1108 00:34:32.079882   57398 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:34:32.080492   57398 main.go:141] libmachine: Using API Version  1
	I1108 00:34:32.080513   57398 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:34:32.080864   57398 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:34:32.081092   57398 main.go:141] libmachine: (auto-010870) Calling .GetMachineName
	I1108 00:34:32.081250   57398 main.go:141] libmachine: (auto-010870) Calling .DriverName
	I1108 00:34:32.081444   57398 start.go:159] libmachine.API.Create for "auto-010870" (driver="kvm2")
	I1108 00:34:32.081482   57398 client.go:168] LocalClient.Create starting
	I1108 00:34:32.081515   57398 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem
	I1108 00:34:32.081560   57398 main.go:141] libmachine: Decoding PEM data...
	I1108 00:34:32.081581   57398 main.go:141] libmachine: Parsing certificate...
	I1108 00:34:32.081675   57398 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem
	I1108 00:34:32.081710   57398 main.go:141] libmachine: Decoding PEM data...
	I1108 00:34:32.081733   57398 main.go:141] libmachine: Parsing certificate...
	I1108 00:34:32.081759   57398 main.go:141] libmachine: Running pre-create checks...
	I1108 00:34:32.081776   57398 main.go:141] libmachine: (auto-010870) Calling .PreCreateCheck
	I1108 00:34:32.082219   57398 main.go:141] libmachine: (auto-010870) Calling .GetConfigRaw
	I1108 00:34:32.082683   57398 main.go:141] libmachine: Creating machine...
	I1108 00:34:32.082704   57398 main.go:141] libmachine: (auto-010870) Calling .Create
	I1108 00:34:32.082842   57398 main.go:141] libmachine: (auto-010870) Creating KVM machine...
	I1108 00:34:32.133317   57398 main.go:141] libmachine: (auto-010870) DBG | found existing default KVM network
	I1108 00:34:32.135097   57398 main.go:141] libmachine: (auto-010870) DBG | I1108 00:34:32.134907   57438 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:4b:f9:48} reservation:<nil>}
	I1108 00:34:32.136368   57398 main.go:141] libmachine: (auto-010870) DBG | I1108 00:34:32.136288   57438 network.go:209] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002746e0}
	I1108 00:34:32.678851   57398 main.go:141] libmachine: (auto-010870) DBG | trying to create private KVM network mk-auto-010870 192.168.50.0/24...
	I1108 00:34:32.752304   57398 main.go:141] libmachine: (auto-010870) DBG | private KVM network mk-auto-010870 192.168.50.0/24 created
	I1108 00:34:32.752339   57398 main.go:141] libmachine: (auto-010870) DBG | I1108 00:34:32.752252   57438 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17585-9647/.minikube
	I1108 00:34:32.752354   57398 main.go:141] libmachine: (auto-010870) Setting up store path in /home/jenkins/minikube-integration/17585-9647/.minikube/machines/auto-010870 ...
	I1108 00:34:32.752375   57398 main.go:141] libmachine: (auto-010870) Building disk image from file:///home/jenkins/minikube-integration/17585-9647/.minikube/cache/iso/amd64/minikube-v1.32.1-amd64.iso
	I1108 00:34:32.752455   57398 main.go:141] libmachine: (auto-010870) Downloading /home/jenkins/minikube-integration/17585-9647/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17585-9647/.minikube/cache/iso/amd64/minikube-v1.32.1-amd64.iso...
	I1108 00:34:32.980116   57398 main.go:141] libmachine: (auto-010870) DBG | I1108 00:34:32.980002   57438 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/auto-010870/id_rsa...
	I1108 00:34:33.205919   57398 main.go:141] libmachine: (auto-010870) DBG | I1108 00:34:33.205782   57438 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/auto-010870/auto-010870.rawdisk...
	I1108 00:34:33.205965   57398 main.go:141] libmachine: (auto-010870) DBG | Writing magic tar header
	I1108 00:34:33.205989   57398 main.go:141] libmachine: (auto-010870) DBG | Writing SSH key tar header
	I1108 00:34:33.206005   57398 main.go:141] libmachine: (auto-010870) DBG | I1108 00:34:33.205903   57438 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17585-9647/.minikube/machines/auto-010870 ...
	I1108 00:34:33.206025   57398 main.go:141] libmachine: (auto-010870) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/auto-010870
	I1108 00:34:33.206092   57398 main.go:141] libmachine: (auto-010870) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17585-9647/.minikube/machines
	I1108 00:34:33.206124   57398 main.go:141] libmachine: (auto-010870) Setting executable bit set on /home/jenkins/minikube-integration/17585-9647/.minikube/machines/auto-010870 (perms=drwx------)
	I1108 00:34:33.206136   57398 main.go:141] libmachine: (auto-010870) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17585-9647/.minikube
	I1108 00:34:33.206152   57398 main.go:141] libmachine: (auto-010870) Setting executable bit set on /home/jenkins/minikube-integration/17585-9647/.minikube/machines (perms=drwxr-xr-x)
	I1108 00:34:33.206166   57398 main.go:141] libmachine: (auto-010870) Setting executable bit set on /home/jenkins/minikube-integration/17585-9647/.minikube (perms=drwxr-xr-x)
	I1108 00:34:33.206178   57398 main.go:141] libmachine: (auto-010870) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17585-9647
	I1108 00:34:33.206194   57398 main.go:141] libmachine: (auto-010870) Setting executable bit set on /home/jenkins/minikube-integration/17585-9647 (perms=drwxrwxr-x)
	I1108 00:34:33.206217   57398 main.go:141] libmachine: (auto-010870) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1108 00:34:33.206233   57398 main.go:141] libmachine: (auto-010870) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1108 00:34:33.206244   57398 main.go:141] libmachine: (auto-010870) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1108 00:34:33.206256   57398 main.go:141] libmachine: (auto-010870) Creating domain...
	I1108 00:34:33.206267   57398 main.go:141] libmachine: (auto-010870) DBG | Checking permissions on dir: /home/jenkins
	I1108 00:34:33.206277   57398 main.go:141] libmachine: (auto-010870) DBG | Checking permissions on dir: /home
	I1108 00:34:33.206304   57398 main.go:141] libmachine: (auto-010870) DBG | Skipping /home - not owner
	I1108 00:34:33.207528   57398 main.go:141] libmachine: (auto-010870) define libvirt domain using xml: 
	I1108 00:34:33.207549   57398 main.go:141] libmachine: (auto-010870) <domain type='kvm'>
	I1108 00:34:33.207561   57398 main.go:141] libmachine: (auto-010870)   <name>auto-010870</name>
	I1108 00:34:33.207569   57398 main.go:141] libmachine: (auto-010870)   <memory unit='MiB'>3072</memory>
	I1108 00:34:33.207580   57398 main.go:141] libmachine: (auto-010870)   <vcpu>2</vcpu>
	I1108 00:34:33.208554   57398 main.go:141] libmachine: (auto-010870)   <features>
	I1108 00:34:33.208599   57398 main.go:141] libmachine: (auto-010870)     <acpi/>
	I1108 00:34:33.208631   57398 main.go:141] libmachine: (auto-010870)     <apic/>
	I1108 00:34:33.208656   57398 main.go:141] libmachine: (auto-010870)     <pae/>
	I1108 00:34:33.208673   57398 main.go:141] libmachine: (auto-010870)     
	I1108 00:34:33.208685   57398 main.go:141] libmachine: (auto-010870)   </features>
	I1108 00:34:33.208693   57398 main.go:141] libmachine: (auto-010870)   <cpu mode='host-passthrough'>
	I1108 00:34:33.208702   57398 main.go:141] libmachine: (auto-010870)   
	I1108 00:34:33.208709   57398 main.go:141] libmachine: (auto-010870)   </cpu>
	I1108 00:34:33.208729   57398 main.go:141] libmachine: (auto-010870)   <os>
	I1108 00:34:33.208739   57398 main.go:141] libmachine: (auto-010870)     <type>hvm</type>
	I1108 00:34:33.208765   57398 main.go:141] libmachine: (auto-010870)     <boot dev='cdrom'/>
	I1108 00:34:33.208787   57398 main.go:141] libmachine: (auto-010870)     <boot dev='hd'/>
	I1108 00:34:33.208797   57398 main.go:141] libmachine: (auto-010870)     <bootmenu enable='no'/>
	I1108 00:34:33.208811   57398 main.go:141] libmachine: (auto-010870)   </os>
	I1108 00:34:33.208836   57398 main.go:141] libmachine: (auto-010870)   <devices>
	I1108 00:34:33.208855   57398 main.go:141] libmachine: (auto-010870)     <disk type='file' device='cdrom'>
	I1108 00:34:33.208873   57398 main.go:141] libmachine: (auto-010870)       <source file='/home/jenkins/minikube-integration/17585-9647/.minikube/machines/auto-010870/boot2docker.iso'/>
	I1108 00:34:33.208885   57398 main.go:141] libmachine: (auto-010870)       <target dev='hdc' bus='scsi'/>
	I1108 00:34:33.208902   57398 main.go:141] libmachine: (auto-010870)       <readonly/>
	I1108 00:34:33.208911   57398 main.go:141] libmachine: (auto-010870)     </disk>
	I1108 00:34:33.208940   57398 main.go:141] libmachine: (auto-010870)     <disk type='file' device='disk'>
	I1108 00:34:33.208976   57398 main.go:141] libmachine: (auto-010870)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1108 00:34:33.208996   57398 main.go:141] libmachine: (auto-010870)       <source file='/home/jenkins/minikube-integration/17585-9647/.minikube/machines/auto-010870/auto-010870.rawdisk'/>
	I1108 00:34:33.209009   57398 main.go:141] libmachine: (auto-010870)       <target dev='hda' bus='virtio'/>
	I1108 00:34:33.209021   57398 main.go:141] libmachine: (auto-010870)     </disk>
	I1108 00:34:33.209033   57398 main.go:141] libmachine: (auto-010870)     <interface type='network'>
	I1108 00:34:33.209053   57398 main.go:141] libmachine: (auto-010870)       <source network='mk-auto-010870'/>
	I1108 00:34:33.209068   57398 main.go:141] libmachine: (auto-010870)       <model type='virtio'/>
	I1108 00:34:33.209082   57398 main.go:141] libmachine: (auto-010870)     </interface>
	I1108 00:34:33.209096   57398 main.go:141] libmachine: (auto-010870)     <interface type='network'>
	I1108 00:34:33.209110   57398 main.go:141] libmachine: (auto-010870)       <source network='default'/>
	I1108 00:34:33.209126   57398 main.go:141] libmachine: (auto-010870)       <model type='virtio'/>
	I1108 00:34:33.209141   57398 main.go:141] libmachine: (auto-010870)     </interface>
	I1108 00:34:33.209154   57398 main.go:141] libmachine: (auto-010870)     <serial type='pty'>
	I1108 00:34:33.209169   57398 main.go:141] libmachine: (auto-010870)       <target port='0'/>
	I1108 00:34:33.209181   57398 main.go:141] libmachine: (auto-010870)     </serial>
	I1108 00:34:33.209201   57398 main.go:141] libmachine: (auto-010870)     <console type='pty'>
	I1108 00:34:33.209216   57398 main.go:141] libmachine: (auto-010870)       <target type='serial' port='0'/>
	I1108 00:34:33.209225   57398 main.go:141] libmachine: (auto-010870)     </console>
	I1108 00:34:33.209238   57398 main.go:141] libmachine: (auto-010870)     <rng model='virtio'>
	I1108 00:34:33.209254   57398 main.go:141] libmachine: (auto-010870)       <backend model='random'>/dev/random</backend>
	I1108 00:34:33.209265   57398 main.go:141] libmachine: (auto-010870)     </rng>
	I1108 00:34:33.209274   57398 main.go:141] libmachine: (auto-010870)     
	I1108 00:34:33.209285   57398 main.go:141] libmachine: (auto-010870)     
	I1108 00:34:33.209302   57398 main.go:141] libmachine: (auto-010870)   </devices>
	I1108 00:34:33.209316   57398 main.go:141] libmachine: (auto-010870) </domain>
	I1108 00:34:33.209331   57398 main.go:141] libmachine: (auto-010870) 
	I1108 00:34:33.216325   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:ee:24:6c in network default
	I1108 00:34:33.217057   57398 main.go:141] libmachine: (auto-010870) Ensuring networks are active...
	I1108 00:34:33.217091   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:34:33.217859   57398 main.go:141] libmachine: (auto-010870) Ensuring network default is active
	I1108 00:34:33.218197   57398 main.go:141] libmachine: (auto-010870) Ensuring network mk-auto-010870 is active
	I1108 00:34:33.218767   57398 main.go:141] libmachine: (auto-010870) Getting domain xml...
	I1108 00:34:33.219604   57398 main.go:141] libmachine: (auto-010870) Creating domain...
	I1108 00:34:34.466499   57398 main.go:141] libmachine: (auto-010870) Waiting to get IP...
	I1108 00:34:34.467277   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:34:34.467738   57398 main.go:141] libmachine: (auto-010870) DBG | unable to find current IP address of domain auto-010870 in network mk-auto-010870
	I1108 00:34:34.467777   57398 main.go:141] libmachine: (auto-010870) DBG | I1108 00:34:34.467714   57438 retry.go:31] will retry after 215.367776ms: waiting for machine to come up
	I1108 00:34:34.685201   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:34:34.685722   57398 main.go:141] libmachine: (auto-010870) DBG | unable to find current IP address of domain auto-010870 in network mk-auto-010870
	I1108 00:34:34.685756   57398 main.go:141] libmachine: (auto-010870) DBG | I1108 00:34:34.685700   57438 retry.go:31] will retry after 334.891933ms: waiting for machine to come up
	I1108 00:34:35.022273   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:34:35.022805   57398 main.go:141] libmachine: (auto-010870) DBG | unable to find current IP address of domain auto-010870 in network mk-auto-010870
	I1108 00:34:35.022858   57398 main.go:141] libmachine: (auto-010870) DBG | I1108 00:34:35.022734   57438 retry.go:31] will retry after 485.427658ms: waiting for machine to come up
	I1108 00:34:35.509352   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:34:35.509828   57398 main.go:141] libmachine: (auto-010870) DBG | unable to find current IP address of domain auto-010870 in network mk-auto-010870
	I1108 00:34:35.509860   57398 main.go:141] libmachine: (auto-010870) DBG | I1108 00:34:35.509778   57438 retry.go:31] will retry after 511.540802ms: waiting for machine to come up
	I1108 00:34:36.022885   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:34:36.023387   57398 main.go:141] libmachine: (auto-010870) DBG | unable to find current IP address of domain auto-010870 in network mk-auto-010870
	I1108 00:34:36.023419   57398 main.go:141] libmachine: (auto-010870) DBG | I1108 00:34:36.023318   57438 retry.go:31] will retry after 700.866963ms: waiting for machine to come up
	I1108 00:34:36.726284   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:34:36.726736   57398 main.go:141] libmachine: (auto-010870) DBG | unable to find current IP address of domain auto-010870 in network mk-auto-010870
	I1108 00:34:36.726775   57398 main.go:141] libmachine: (auto-010870) DBG | I1108 00:34:36.726698   57438 retry.go:31] will retry after 583.930536ms: waiting for machine to come up
	I1108 00:34:34.195786   57577 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1108 00:34:34.195821   57577 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1108 00:34:34.195831   57577 cache.go:56] Caching tarball of preloaded images
	I1108 00:34:34.195917   57577 preload.go:174] Found /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 00:34:34.195927   57577 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1108 00:34:34.196026   57577 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/kindnet-010870/config.json ...
	I1108 00:34:34.196043   57577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/kindnet-010870/config.json: {Name:mk6d79c878db3ec25bef390677e97fe763f37034 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:34:34.196158   57577 start.go:365] acquiring machines lock for kindnet-010870: {Name:mkf032f30be570950285b6e092e75fb29cc3d166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1108 00:34:37.311949   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:34:37.312509   57398 main.go:141] libmachine: (auto-010870) DBG | unable to find current IP address of domain auto-010870 in network mk-auto-010870
	I1108 00:34:37.312537   57398 main.go:141] libmachine: (auto-010870) DBG | I1108 00:34:37.312457   57438 retry.go:31] will retry after 838.211706ms: waiting for machine to come up
	I1108 00:34:38.152577   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:34:38.153041   57398 main.go:141] libmachine: (auto-010870) DBG | unable to find current IP address of domain auto-010870 in network mk-auto-010870
	I1108 00:34:38.153093   57398 main.go:141] libmachine: (auto-010870) DBG | I1108 00:34:38.153017   57438 retry.go:31] will retry after 992.61157ms: waiting for machine to come up
	I1108 00:34:39.147262   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:34:39.147707   57398 main.go:141] libmachine: (auto-010870) DBG | unable to find current IP address of domain auto-010870 in network mk-auto-010870
	I1108 00:34:39.147740   57398 main.go:141] libmachine: (auto-010870) DBG | I1108 00:34:39.147652   57438 retry.go:31] will retry after 1.717386213s: waiting for machine to come up
	I1108 00:34:40.867660   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:34:40.868172   57398 main.go:141] libmachine: (auto-010870) DBG | unable to find current IP address of domain auto-010870 in network mk-auto-010870
	I1108 00:34:40.868204   57398 main.go:141] libmachine: (auto-010870) DBG | I1108 00:34:40.868119   57438 retry.go:31] will retry after 1.479628992s: waiting for machine to come up
	I1108 00:34:42.349372   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:34:42.349803   57398 main.go:141] libmachine: (auto-010870) DBG | unable to find current IP address of domain auto-010870 in network mk-auto-010870
	I1108 00:34:42.349837   57398 main.go:141] libmachine: (auto-010870) DBG | I1108 00:34:42.349760   57438 retry.go:31] will retry after 2.597026266s: waiting for machine to come up
	I1108 00:34:44.949574   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:34:44.950038   57398 main.go:141] libmachine: (auto-010870) DBG | unable to find current IP address of domain auto-010870 in network mk-auto-010870
	I1108 00:34:44.950071   57398 main.go:141] libmachine: (auto-010870) DBG | I1108 00:34:44.949978   57438 retry.go:31] will retry after 3.466460602s: waiting for machine to come up
	I1108 00:34:48.417498   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:34:48.417982   57398 main.go:141] libmachine: (auto-010870) DBG | unable to find current IP address of domain auto-010870 in network mk-auto-010870
	I1108 00:34:48.417999   57398 main.go:141] libmachine: (auto-010870) DBG | I1108 00:34:48.417951   57438 retry.go:31] will retry after 4.545575261s: waiting for machine to come up
	I1108 00:34:52.965355   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:34:52.965783   57398 main.go:141] libmachine: (auto-010870) DBG | unable to find current IP address of domain auto-010870 in network mk-auto-010870
	I1108 00:34:52.965815   57398 main.go:141] libmachine: (auto-010870) DBG | I1108 00:34:52.965698   57438 retry.go:31] will retry after 3.750707025s: waiting for machine to come up
	I1108 00:34:56.720559   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:34:56.721169   57398 main.go:141] libmachine: (auto-010870) Found IP for machine: 192.168.50.47
	I1108 00:34:56.721200   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has current primary IP address 192.168.50.47 and MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:34:56.721210   57398 main.go:141] libmachine: (auto-010870) Reserving static IP address...
	I1108 00:34:56.721588   57398 main.go:141] libmachine: (auto-010870) DBG | unable to find host DHCP lease matching {name: "auto-010870", mac: "52:54:00:89:05:5f", ip: "192.168.50.47"} in network mk-auto-010870
	I1108 00:34:56.793926   57398 main.go:141] libmachine: (auto-010870) DBG | Getting to WaitForSSH function...
	I1108 00:34:56.793960   57398 main.go:141] libmachine: (auto-010870) Reserved static IP address: 192.168.50.47
	I1108 00:34:56.793975   57398 main.go:141] libmachine: (auto-010870) Waiting for SSH to be available...
	I1108 00:34:56.796581   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:34:56.796872   57398 main.go:141] libmachine: (auto-010870) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:89:05:5f", ip: ""} in network mk-auto-010870
	I1108 00:34:56.796896   57398 main.go:141] libmachine: (auto-010870) DBG | unable to find defined IP address of network mk-auto-010870 interface with MAC address 52:54:00:89:05:5f
	I1108 00:34:56.796972   57398 main.go:141] libmachine: (auto-010870) DBG | Using SSH client type: external
	I1108 00:34:56.796998   57398 main.go:141] libmachine: (auto-010870) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/auto-010870/id_rsa (-rw-------)
	I1108 00:34:56.797065   57398 main.go:141] libmachine: (auto-010870) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/auto-010870/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1108 00:34:56.797083   57398 main.go:141] libmachine: (auto-010870) DBG | About to run SSH command:
	I1108 00:34:56.797101   57398 main.go:141] libmachine: (auto-010870) DBG | exit 0
	I1108 00:34:56.800738   57398 main.go:141] libmachine: (auto-010870) DBG | SSH cmd err, output: exit status 255: 
	I1108 00:34:56.800765   57398 main.go:141] libmachine: (auto-010870) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1108 00:34:56.800778   57398 main.go:141] libmachine: (auto-010870) DBG | command : exit 0
	I1108 00:34:56.800785   57398 main.go:141] libmachine: (auto-010870) DBG | err     : exit status 255
	I1108 00:34:56.800793   57398 main.go:141] libmachine: (auto-010870) DBG | output  : 
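The failed `exit 0` above is the driver's SSH liveness probe: it shells out to `/usr/bin/ssh` with the flags shown and treats exit status 0 as "sshd is up" (the 255 here means the guest is not reachable yet). A rough stdlib-only equivalent; the host, key path, and retry count are placeholder values:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // sshReady runs `exit 0` on the guest via the external ssh binary,
    // mirroring the flags in the log (no host-key checks, key auth only).
    func sshReady(host, keyPath string) bool {
    	cmd := exec.Command("ssh",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"docker@"+host, "exit 0")
    	return cmd.Run() == nil // exit status 0 means sshd answered
    }

    func main() {
    	for i := 0; i < 5; i++ {
    		if sshReady("192.168.50.47", "/path/to/id_rsa") {
    			fmt.Println("SSH is available")
    			return
    		}
    		time.Sleep(3 * time.Second) // the driver keeps retrying until the guest boots
    	}
    	fmt.Println("gave up waiting for SSH")
    }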
	I1108 00:35:01.469698   57577 start.go:369] acquired machines lock for "kindnet-010870" in 27.273496156s
	I1108 00:35:01.469762   57577 start.go:93] Provisioning new machine with config: &{Name:kindnet-010870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:kindnet-010870 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}

	I1108 00:35:01.469866   57577 start.go:125] createHost starting for "" (driver="kvm2")
	I1108 00:34:59.801277   57398 main.go:141] libmachine: (auto-010870) DBG | Getting to WaitForSSH function...
	I1108 00:34:59.803882   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:34:59.804279   57398 main.go:141] libmachine: (auto-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:05:5f", ip: ""} in network mk-auto-010870: {Iface:virbr3 ExpiryTime:2023-11-08 01:34:48 +0000 UTC Type:0 Mac:52:54:00:89:05:5f Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-010870 Clientid:01:52:54:00:89:05:5f}
	I1108 00:34:59.804313   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined IP address 192.168.50.47 and MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:34:59.804466   57398 main.go:141] libmachine: (auto-010870) DBG | Using SSH client type: external
	I1108 00:34:59.804496   57398 main.go:141] libmachine: (auto-010870) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/auto-010870/id_rsa (-rw-------)
	I1108 00:34:59.804527   57398 main.go:141] libmachine: (auto-010870) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.47 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/auto-010870/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1108 00:34:59.804542   57398 main.go:141] libmachine: (auto-010870) DBG | About to run SSH command:
	I1108 00:34:59.804559   57398 main.go:141] libmachine: (auto-010870) DBG | exit 0
	I1108 00:34:59.940730   57398 main.go:141] libmachine: (auto-010870) DBG | SSH cmd err, output: <nil>: 
	I1108 00:34:59.941008   57398 main.go:141] libmachine: (auto-010870) KVM machine creation complete!
	I1108 00:34:59.941318   57398 main.go:141] libmachine: (auto-010870) Calling .GetConfigRaw
	I1108 00:34:59.941863   57398 main.go:141] libmachine: (auto-010870) Calling .DriverName
	I1108 00:34:59.942046   57398 main.go:141] libmachine: (auto-010870) Calling .DriverName
	I1108 00:34:59.942194   57398 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1108 00:34:59.942206   57398 main.go:141] libmachine: (auto-010870) Calling .GetState
	I1108 00:34:59.943444   57398 main.go:141] libmachine: Detecting operating system of created instance...
	I1108 00:34:59.943461   57398 main.go:141] libmachine: Waiting for SSH to be available...
	I1108 00:34:59.943478   57398 main.go:141] libmachine: Getting to WaitForSSH function...
	I1108 00:34:59.943489   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHHostname
	I1108 00:34:59.945989   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:34:59.946403   57398 main.go:141] libmachine: (auto-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:05:5f", ip: ""} in network mk-auto-010870: {Iface:virbr3 ExpiryTime:2023-11-08 01:34:48 +0000 UTC Type:0 Mac:52:54:00:89:05:5f Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-010870 Clientid:01:52:54:00:89:05:5f}
	I1108 00:34:59.946431   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined IP address 192.168.50.47 and MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:34:59.946545   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHPort
	I1108 00:34:59.946692   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHKeyPath
	I1108 00:34:59.946838   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHKeyPath
	I1108 00:34:59.946989   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHUsername
	I1108 00:34:59.947139   57398 main.go:141] libmachine: Using SSH client type: native
	I1108 00:34:59.947469   57398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I1108 00:34:59.947482   57398 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1108 00:35:00.068087   57398 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 00:35:00.068115   57398 main.go:141] libmachine: Detecting the provisioner...
	I1108 00:35:00.068127   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHHostname
	I1108 00:35:00.070978   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:00.071312   57398 main.go:141] libmachine: (auto-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:05:5f", ip: ""} in network mk-auto-010870: {Iface:virbr3 ExpiryTime:2023-11-08 01:34:48 +0000 UTC Type:0 Mac:52:54:00:89:05:5f Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-010870 Clientid:01:52:54:00:89:05:5f}
	I1108 00:35:00.071344   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined IP address 192.168.50.47 and MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:00.071510   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHPort
	I1108 00:35:00.071685   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHKeyPath
	I1108 00:35:00.071849   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHKeyPath
	I1108 00:35:00.072001   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHUsername
	I1108 00:35:00.072174   57398 main.go:141] libmachine: Using SSH client type: native
	I1108 00:35:00.072529   57398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I1108 00:35:00.072544   57398 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1108 00:35:00.193468   57398 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb75713b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1108 00:35:00.193513   57398 main.go:141] libmachine: found compatible host: buildroot
	I1108 00:35:00.193520   57398 main.go:141] libmachine: Provisioning with buildroot...
	I1108 00:35:00.193528   57398 main.go:141] libmachine: (auto-010870) Calling .GetMachineName
	I1108 00:35:00.193766   57398 buildroot.go:166] provisioning hostname "auto-010870"
	I1108 00:35:00.193795   57398 main.go:141] libmachine: (auto-010870) Calling .GetMachineName
	I1108 00:35:00.193965   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHHostname
	I1108 00:35:00.196511   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:00.196842   57398 main.go:141] libmachine: (auto-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:05:5f", ip: ""} in network mk-auto-010870: {Iface:virbr3 ExpiryTime:2023-11-08 01:34:48 +0000 UTC Type:0 Mac:52:54:00:89:05:5f Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-010870 Clientid:01:52:54:00:89:05:5f}
	I1108 00:35:00.196872   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined IP address 192.168.50.47 and MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:00.196986   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHPort
	I1108 00:35:00.197165   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHKeyPath
	I1108 00:35:00.197320   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHKeyPath
	I1108 00:35:00.197475   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHUsername
	I1108 00:35:00.197648   57398 main.go:141] libmachine: Using SSH client type: native
	I1108 00:35:00.198035   57398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I1108 00:35:00.198048   57398 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-010870 && echo "auto-010870" | sudo tee /etc/hostname
	I1108 00:35:00.334522   57398 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-010870
	
	I1108 00:35:00.334567   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHHostname
	I1108 00:35:00.337360   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:00.337663   57398 main.go:141] libmachine: (auto-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:05:5f", ip: ""} in network mk-auto-010870: {Iface:virbr3 ExpiryTime:2023-11-08 01:34:48 +0000 UTC Type:0 Mac:52:54:00:89:05:5f Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-010870 Clientid:01:52:54:00:89:05:5f}
	I1108 00:35:00.337694   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined IP address 192.168.50.47 and MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:00.337824   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHPort
	I1108 00:35:00.337983   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHKeyPath
	I1108 00:35:00.338096   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHKeyPath
	I1108 00:35:00.338219   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHUsername
	I1108 00:35:00.338371   57398 main.go:141] libmachine: Using SSH client type: native
	I1108 00:35:00.338726   57398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I1108 00:35:00.338743   57398 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-010870' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-010870/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-010870' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 00:35:00.469694   57398 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 00:35:00.469724   57398 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1108 00:35:00.469753   57398 buildroot.go:174] setting up certificates
	I1108 00:35:00.469764   57398 provision.go:83] configureAuth start
	I1108 00:35:00.469776   57398 main.go:141] libmachine: (auto-010870) Calling .GetMachineName
	I1108 00:35:00.470061   57398 main.go:141] libmachine: (auto-010870) Calling .GetIP
	I1108 00:35:00.472754   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:00.473257   57398 main.go:141] libmachine: (auto-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:05:5f", ip: ""} in network mk-auto-010870: {Iface:virbr3 ExpiryTime:2023-11-08 01:34:48 +0000 UTC Type:0 Mac:52:54:00:89:05:5f Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-010870 Clientid:01:52:54:00:89:05:5f}
	I1108 00:35:00.473292   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined IP address 192.168.50.47 and MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:00.473431   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHHostname
	I1108 00:35:00.475862   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:00.476230   57398 main.go:141] libmachine: (auto-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:05:5f", ip: ""} in network mk-auto-010870: {Iface:virbr3 ExpiryTime:2023-11-08 01:34:48 +0000 UTC Type:0 Mac:52:54:00:89:05:5f Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-010870 Clientid:01:52:54:00:89:05:5f}
	I1108 00:35:00.476259   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined IP address 192.168.50.47 and MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:00.476383   57398 provision.go:138] copyHostCerts
	I1108 00:35:00.476443   57398 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1108 00:35:00.476464   57398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1108 00:35:00.476546   57398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1108 00:35:00.476642   57398 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1108 00:35:00.476654   57398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1108 00:35:00.476692   57398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1108 00:35:00.476776   57398 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1108 00:35:00.476787   57398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1108 00:35:00.476837   57398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
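`copyHostCerts` refreshes the top-level ca/cert/key PEM copies with a remove-then-copy per file, matching the found/rm/cp triples above. A compact sketch of that idiom; the paths come from the log, the helper name is illustrative:

    package main

    import (
    	"fmt"
    	"io"
    	"os"
    	"path/filepath"
    )

    // refreshCert removes a stale destination and copies src over it,
    // like the found/rm/cp sequences in the log above.
    func refreshCert(src, dst string) error {
    	if _, err := os.Stat(dst); err == nil {
    		if err := os.Remove(dst); err != nil { // "found ..., removing ..."
    			return err
    		}
    	}
    	in, err := os.Open(src)
    	if err != nil {
    		return err
    	}
    	defer in.Close()
    	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY, 0o600)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	_, err = io.Copy(out, in)
    	return err
    }

    func main() {
    	mk := "/home/jenkins/minikube-integration/17585-9647/.minikube"
    	for _, f := range []string{"ca.pem", "cert.pem", "key.pem"} {
    		if err := refreshCert(filepath.Join(mk, "certs", f), filepath.Join(mk, f)); err != nil {
    			fmt.Println(err)
    		}
    	}
    }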
	I1108 00:35:00.476900   57398 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.auto-010870 san=[192.168.50.47 192.168.50.47 localhost 127.0.0.1 minikube auto-010870]
	I1108 00:35:00.714399   57398 provision.go:172] copyRemoteCerts
	I1108 00:35:00.714459   57398 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 00:35:00.714488   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHHostname
	I1108 00:35:00.717106   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:00.717415   57398 main.go:141] libmachine: (auto-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:05:5f", ip: ""} in network mk-auto-010870: {Iface:virbr3 ExpiryTime:2023-11-08 01:34:48 +0000 UTC Type:0 Mac:52:54:00:89:05:5f Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-010870 Clientid:01:52:54:00:89:05:5f}
	I1108 00:35:00.717449   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined IP address 192.168.50.47 and MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:00.717571   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHPort
	I1108 00:35:00.717749   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHKeyPath
	I1108 00:35:00.717866   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHUsername
	I1108 00:35:00.717977   57398 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/auto-010870/id_rsa Username:docker}
	I1108 00:35:00.810680   57398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 00:35:00.835055   57398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1108 00:35:00.858552   57398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 00:35:00.881981   57398 provision.go:86] duration metric: configureAuth took 412.204684ms
	I1108 00:35:00.882005   57398 buildroot.go:189] setting minikube options for container-runtime
	I1108 00:35:00.882186   57398 config.go:182] Loaded profile config "auto-010870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:35:00.882266   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHHostname
	I1108 00:35:00.885114   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:00.885591   57398 main.go:141] libmachine: (auto-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:05:5f", ip: ""} in network mk-auto-010870: {Iface:virbr3 ExpiryTime:2023-11-08 01:34:48 +0000 UTC Type:0 Mac:52:54:00:89:05:5f Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-010870 Clientid:01:52:54:00:89:05:5f}
	I1108 00:35:00.885627   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined IP address 192.168.50.47 and MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:00.885786   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHPort
	I1108 00:35:00.886001   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHKeyPath
	I1108 00:35:00.886158   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHKeyPath
	I1108 00:35:00.886303   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHUsername
	I1108 00:35:00.886452   57398 main.go:141] libmachine: Using SSH client type: native
	I1108 00:35:00.886915   57398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I1108 00:35:00.886945   57398 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 00:35:01.202719   57398 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 00:35:01.202744   57398 main.go:141] libmachine: Checking connection to Docker...
	I1108 00:35:01.202752   57398 main.go:141] libmachine: (auto-010870) Calling .GetURL
	I1108 00:35:01.203896   57398 main.go:141] libmachine: (auto-010870) DBG | Using libvirt version 6000000
	I1108 00:35:01.206432   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:01.206764   57398 main.go:141] libmachine: (auto-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:05:5f", ip: ""} in network mk-auto-010870: {Iface:virbr3 ExpiryTime:2023-11-08 01:34:48 +0000 UTC Type:0 Mac:52:54:00:89:05:5f Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-010870 Clientid:01:52:54:00:89:05:5f}
	I1108 00:35:01.206787   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined IP address 192.168.50.47 and MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:01.206989   57398 main.go:141] libmachine: Docker is up and running!
	I1108 00:35:01.207007   57398 main.go:141] libmachine: Reticulating splines...
	I1108 00:35:01.207016   57398 client.go:171] LocalClient.Create took 29.125522539s
	I1108 00:35:01.207047   57398 start.go:167] duration metric: libmachine.API.Create for "auto-010870" took 29.125603584s
	I1108 00:35:01.207059   57398 start.go:300] post-start starting for "auto-010870" (driver="kvm2")
	I1108 00:35:01.207082   57398 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 00:35:01.207105   57398 main.go:141] libmachine: (auto-010870) Calling .DriverName
	I1108 00:35:01.207349   57398 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 00:35:01.207379   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHHostname
	I1108 00:35:01.209582   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:01.209918   57398 main.go:141] libmachine: (auto-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:05:5f", ip: ""} in network mk-auto-010870: {Iface:virbr3 ExpiryTime:2023-11-08 01:34:48 +0000 UTC Type:0 Mac:52:54:00:89:05:5f Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-010870 Clientid:01:52:54:00:89:05:5f}
	I1108 00:35:01.209944   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined IP address 192.168.50.47 and MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:01.210098   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHPort
	I1108 00:35:01.210301   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHKeyPath
	I1108 00:35:01.210475   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHUsername
	I1108 00:35:01.210623   57398 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/auto-010870/id_rsa Username:docker}
	I1108 00:35:01.303291   57398 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 00:35:01.307588   57398 info.go:137] Remote host: Buildroot 2021.02.12
	I1108 00:35:01.307616   57398 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1108 00:35:01.307684   57398 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1108 00:35:01.307781   57398 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1108 00:35:01.307899   57398 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 00:35:01.317432   57398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:35:01.339602   57398 start.go:303] post-start completed in 132.532759ms
	I1108 00:35:01.339641   57398 main.go:141] libmachine: (auto-010870) Calling .GetConfigRaw
	I1108 00:35:01.340159   57398 main.go:141] libmachine: (auto-010870) Calling .GetIP
	I1108 00:35:01.342579   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:01.342953   57398 main.go:141] libmachine: (auto-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:05:5f", ip: ""} in network mk-auto-010870: {Iface:virbr3 ExpiryTime:2023-11-08 01:34:48 +0000 UTC Type:0 Mac:52:54:00:89:05:5f Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-010870 Clientid:01:52:54:00:89:05:5f}
	I1108 00:35:01.342984   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined IP address 192.168.50.47 and MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:01.343187   57398 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/auto-010870/config.json ...
	I1108 00:35:01.343376   57398 start.go:128] duration metric: createHost completed in 29.280555973s
	I1108 00:35:01.343401   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHHostname
	I1108 00:35:01.345501   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:01.345776   57398 main.go:141] libmachine: (auto-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:05:5f", ip: ""} in network mk-auto-010870: {Iface:virbr3 ExpiryTime:2023-11-08 01:34:48 +0000 UTC Type:0 Mac:52:54:00:89:05:5f Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-010870 Clientid:01:52:54:00:89:05:5f}
	I1108 00:35:01.345797   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined IP address 192.168.50.47 and MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:01.345980   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHPort
	I1108 00:35:01.346173   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHKeyPath
	I1108 00:35:01.346332   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHKeyPath
	I1108 00:35:01.346461   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHUsername
	I1108 00:35:01.346662   57398 main.go:141] libmachine: Using SSH client type: native
	I1108 00:35:01.347143   57398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I1108 00:35:01.347159   57398 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1108 00:35:01.469537   57398 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699403701.458750716
	
	I1108 00:35:01.469564   57398 fix.go:206] guest clock: 1699403701.458750716
	I1108 00:35:01.469573   57398 fix.go:219] Guest: 2023-11-08 00:35:01.458750716 +0000 UTC Remote: 2023-11-08 00:35:01.343387111 +0000 UTC m=+29.421606300 (delta=115.363605ms)
	I1108 00:35:01.469602   57398 fix.go:190] guest clock delta is within tolerance: 115.363605ms
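The `fix.go` lines read the guest clock over SSH (`date +%s.%N`), diff it against the host clock, and skip resynchronization while the delta stays inside a tolerance window. A small sketch of that comparison; the 2s tolerance is an assumed illustration value, not necessarily minikube's:

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockDeltaOK reports whether the guest clock is close enough to the
    // host clock; the 2s tolerance here is an assumed example value.
    func clockDeltaOK(guest, host time.Time) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= 2*time.Second
    }

    func main() {
    	host := time.Now()
    	guest := host.Add(115 * time.Millisecond) // the delta observed in the log
    	if d, ok := clockDeltaOK(guest, host); ok {
    		fmt.Printf("guest clock delta is within tolerance: %v\n", d)
    	} else {
    		fmt.Printf("guest clock needs adjustment, delta %v\n", d)
    	}
    }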
	I1108 00:35:01.469610   57398 start.go:83] releasing machines lock for "auto-010870", held for 29.406883332s
	I1108 00:35:01.469636   57398 main.go:141] libmachine: (auto-010870) Calling .DriverName
	I1108 00:35:01.469918   57398 main.go:141] libmachine: (auto-010870) Calling .GetIP
	I1108 00:35:01.472612   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:01.472984   57398 main.go:141] libmachine: (auto-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:05:5f", ip: ""} in network mk-auto-010870: {Iface:virbr3 ExpiryTime:2023-11-08 01:34:48 +0000 UTC Type:0 Mac:52:54:00:89:05:5f Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-010870 Clientid:01:52:54:00:89:05:5f}
	I1108 00:35:01.473013   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined IP address 192.168.50.47 and MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:01.473189   57398 main.go:141] libmachine: (auto-010870) Calling .DriverName
	I1108 00:35:01.473684   57398 main.go:141] libmachine: (auto-010870) Calling .DriverName
	I1108 00:35:01.473849   57398 main.go:141] libmachine: (auto-010870) Calling .DriverName
	I1108 00:35:01.473937   57398 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 00:35:01.473975   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHHostname
	I1108 00:35:01.474076   57398 ssh_runner.go:195] Run: cat /version.json
	I1108 00:35:01.474120   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHHostname
	I1108 00:35:01.476638   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:01.476953   57398 main.go:141] libmachine: (auto-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:05:5f", ip: ""} in network mk-auto-010870: {Iface:virbr3 ExpiryTime:2023-11-08 01:34:48 +0000 UTC Type:0 Mac:52:54:00:89:05:5f Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-010870 Clientid:01:52:54:00:89:05:5f}
	I1108 00:35:01.476987   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:01.477007   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined IP address 192.168.50.47 and MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:01.477121   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHPort
	I1108 00:35:01.477286   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHKeyPath
	I1108 00:35:01.477375   57398 main.go:141] libmachine: (auto-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:05:5f", ip: ""} in network mk-auto-010870: {Iface:virbr3 ExpiryTime:2023-11-08 01:34:48 +0000 UTC Type:0 Mac:52:54:00:89:05:5f Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-010870 Clientid:01:52:54:00:89:05:5f}
	I1108 00:35:01.477405   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined IP address 192.168.50.47 and MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:01.477556   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHUsername
	I1108 00:35:01.477571   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHPort
	I1108 00:35:01.477754   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHKeyPath
	I1108 00:35:01.477754   57398 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/auto-010870/id_rsa Username:docker}
	I1108 00:35:01.477938   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHUsername
	I1108 00:35:01.478078   57398 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/auto-010870/id_rsa Username:docker}
	I1108 00:35:01.586714   57398 ssh_runner.go:195] Run: systemctl --version
	I1108 00:35:01.593416   57398 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 00:35:01.756117   57398 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 00:35:01.762317   57398 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 00:35:01.762423   57398 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:35:01.778133   57398 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 00:35:01.778158   57398 start.go:472] detecting cgroup driver to use...
	I1108 00:35:01.778220   57398 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 00:35:01.793805   57398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 00:35:01.806324   57398 docker.go:203] disabling cri-docker service (if available) ...
	I1108 00:35:01.806388   57398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 00:35:01.818727   57398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 00:35:01.831047   57398 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 00:35:01.949694   57398 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 00:35:02.085144   57398 docker.go:219] disabling docker service ...
	I1108 00:35:02.085226   57398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 00:35:02.100080   57398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 00:35:02.112346   57398 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 00:35:02.237734   57398 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 00:35:02.368673   57398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 00:35:02.382446   57398 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 00:35:02.400505   57398 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1108 00:35:02.400571   57398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:35:02.413369   57398 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 00:35:02.413430   57398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:35:02.426041   57398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:35:02.435351   57398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:35:02.444268   57398 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
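The run above rewrites `/etc/crio/crio.conf.d/02-crio.conf` in place with `sed`: pin the pause image, set `cgroup_manager = "cgroupfs"`, and re-add `conmon_cgroup = "pod"`. The same key = value rewrite expressed in Go; only the path and keys come from the log, the helper itself is illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setConfValue replaces (or appends) a `key = "value"` line in a crio
    // drop-in config, mirroring the sed one-liners above.
    func setConfValue(path, key, value string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	line := fmt.Sprintf("%s = %q", key, value)
    	if re.Match(data) {
    		data = re.ReplaceAll(data, []byte(line))
    	} else {
    		data = append(data, []byte("\n"+line+"\n")...)
    	}
    	return os.WriteFile(path, data, 0o644)
    }

    func main() {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
    	for k, v := range map[string]string{
    		"pause_image":    "registry.k8s.io/pause:3.9",
    		"cgroup_manager": "cgroupfs",
    	} {
    		if err := setConfValue(conf, k, v); err != nil {
    			fmt.Println("rewrite failed:", err)
    		}
    	}
    }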
	I1108 00:35:02.453360   57398 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 00:35:02.461259   57398 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1108 00:35:02.461323   57398 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1108 00:35:02.473857   57398 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
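Note the fallback: when `sysctl net.bridge.bridge-nf-call-iptables` cannot be read, the driver loads `br_netfilter` and then enables IPv4 forwarding directly. A small Go wrapper around the same shell commands; the command strings are copied from the log, the `run` helper is illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func run(name string, args ...string) error {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("%s %v: %w\n%s", name, args, err, out)
    	}
    	return nil
    }

    func ensureNetfilter() error {
    	// Verify the bridge netfilter knob exists; if not, load the module.
    	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
    		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
    			return err
    		}
    	}
    	// Enable IPv4 forwarding either way, as the log does.
    	return run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
    }

    func main() {
    	if err := ensureNetfilter(); err != nil {
    		fmt.Println(err)
    	}
    }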
	I1108 00:35:02.482145   57398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 00:35:02.603285   57398 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 00:35:02.794498   57398 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 00:35:02.794578   57398 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 00:35:02.800219   57398 start.go:540] Will wait 60s for crictl version
	I1108 00:35:02.800277   57398 ssh_runner.go:195] Run: which crictl
	I1108 00:35:02.804095   57398 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 00:35:02.845027   57398 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1108 00:35:02.845105   57398 ssh_runner.go:195] Run: crio --version
	I1108 00:35:02.897566   57398 ssh_runner.go:195] Run: crio --version
	I1108 00:35:02.953605   57398 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1108 00:35:01.472208   57577 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1108 00:35:01.472376   57577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:35:01.472432   57577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:35:01.491722   57577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42415
	I1108 00:35:01.492120   57577 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:35:01.492712   57577 main.go:141] libmachine: Using API Version  1
	I1108 00:35:01.492733   57577 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:35:01.493090   57577 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:35:01.493302   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetMachineName
	I1108 00:35:01.493458   57577 main.go:141] libmachine: (kindnet-010870) Calling .DriverName
	I1108 00:35:01.493629   57577 start.go:159] libmachine.API.Create for "kindnet-010870" (driver="kvm2")
	I1108 00:35:01.493663   57577 client.go:168] LocalClient.Create starting
	I1108 00:35:01.493699   57577 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem
	I1108 00:35:01.493739   57577 main.go:141] libmachine: Decoding PEM data...
	I1108 00:35:01.493768   57577 main.go:141] libmachine: Parsing certificate...
	I1108 00:35:01.493858   57577 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem
	I1108 00:35:01.493889   57577 main.go:141] libmachine: Decoding PEM data...
	I1108 00:35:01.493905   57577 main.go:141] libmachine: Parsing certificate...
	I1108 00:35:01.493936   57577 main.go:141] libmachine: Running pre-create checks...
	I1108 00:35:01.493953   57577 main.go:141] libmachine: (kindnet-010870) Calling .PreCreateCheck
	I1108 00:35:01.494441   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetConfigRaw
	I1108 00:35:01.494957   57577 main.go:141] libmachine: Creating machine...
	I1108 00:35:01.494976   57577 main.go:141] libmachine: (kindnet-010870) Calling .Create
	I1108 00:35:01.495147   57577 main.go:141] libmachine: (kindnet-010870) Creating KVM machine...
	I1108 00:35:01.496240   57577 main.go:141] libmachine: (kindnet-010870) DBG | found existing default KVM network
	I1108 00:35:01.497345   57577 main.go:141] libmachine: (kindnet-010870) DBG | I1108 00:35:01.497200   57758 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:4b:f9:48} reservation:<nil>}
	I1108 00:35:01.498298   57577 main.go:141] libmachine: (kindnet-010870) DBG | I1108 00:35:01.498206   57758 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:59:d5:84} reservation:<nil>}
	I1108 00:35:01.499379   57577 main.go:141] libmachine: (kindnet-010870) DBG | I1108 00:35:01.499307   57758 network.go:209] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a9040}
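Here `network.go` scans candidate private /24s, skipping any already claimed by an existing libvirt network, and settles on 192.168.61.0/24. A compact sketch of that overlap check using the standard `net` package; the candidate list is abbreviated to the subnets visible in the log:

    package main

    import (
    	"fmt"
    	"net"
    )

    // firstFreeSubnet returns the first candidate CIDR whose network
    // address is not already inside an in-use subnet.
    func firstFreeSubnet(candidates, taken []string) (string, error) {
    	var used []*net.IPNet
    	for _, t := range taken {
    		_, n, err := net.ParseCIDR(t)
    		if err != nil {
    			return "", err
    		}
    		used = append(used, n)
    	}
    next:
    	for _, c := range candidates {
    		ip, _, err := net.ParseCIDR(c)
    		if err != nil {
    			return "", err
    		}
    		for _, n := range used {
    			if n.Contains(ip) { // overlaps an existing network, skip it
    				continue next
    			}
    		}
    		return c, nil
    	}
    	return "", fmt.Errorf("no free subnet among %d candidates", len(candidates))
    }

    func main() {
    	free, err := firstFreeSubnet(
    		[]string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"},
    		[]string{"192.168.39.0/24", "192.168.50.0/24"}, // taken, per the log
    	)
    	fmt.Println(free, err) // 192.168.61.0/24 <nil>
    }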
	I1108 00:35:01.505374   57577 main.go:141] libmachine: (kindnet-010870) DBG | trying to create private KVM network mk-kindnet-010870 192.168.61.0/24...
	I1108 00:35:01.585403   57577 main.go:141] libmachine: (kindnet-010870) DBG | private KVM network mk-kindnet-010870 192.168.61.0/24 created
	I1108 00:35:01.585604   57577 main.go:141] libmachine: (kindnet-010870) Setting up store path in /home/jenkins/minikube-integration/17585-9647/.minikube/machines/kindnet-010870 ...
	I1108 00:35:01.585640   57577 main.go:141] libmachine: (kindnet-010870) Building disk image from file:///home/jenkins/minikube-integration/17585-9647/.minikube/cache/iso/amd64/minikube-v1.32.1-amd64.iso
	I1108 00:35:01.585913   57577 main.go:141] libmachine: (kindnet-010870) DBG | I1108 00:35:01.585558   57758 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17585-9647/.minikube
	I1108 00:35:01.585939   57577 main.go:141] libmachine: (kindnet-010870) Downloading /home/jenkins/minikube-integration/17585-9647/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17585-9647/.minikube/cache/iso/amd64/minikube-v1.32.1-amd64.iso...
	I1108 00:35:01.799953   57577 main.go:141] libmachine: (kindnet-010870) DBG | I1108 00:35:01.799847   57758 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/kindnet-010870/id_rsa...
	I1108 00:35:01.905641   57577 main.go:141] libmachine: (kindnet-010870) DBG | I1108 00:35:01.905523   57758 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/kindnet-010870/kindnet-010870.rawdisk...
	I1108 00:35:01.905676   57577 main.go:141] libmachine: (kindnet-010870) DBG | Writing magic tar header
	I1108 00:35:01.905694   57577 main.go:141] libmachine: (kindnet-010870) DBG | Writing SSH key tar header
	I1108 00:35:01.905712   57577 main.go:141] libmachine: (kindnet-010870) DBG | I1108 00:35:01.905640   57758 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17585-9647/.minikube/machines/kindnet-010870 ...
	I1108 00:35:01.905770   57577 main.go:141] libmachine: (kindnet-010870) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/kindnet-010870
	I1108 00:35:01.905799   57577 main.go:141] libmachine: (kindnet-010870) Setting executable bit set on /home/jenkins/minikube-integration/17585-9647/.minikube/machines/kindnet-010870 (perms=drwx------)
	I1108 00:35:01.905818   57577 main.go:141] libmachine: (kindnet-010870) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17585-9647/.minikube/machines
	I1108 00:35:01.905833   57577 main.go:141] libmachine: (kindnet-010870) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17585-9647/.minikube
	I1108 00:35:01.905850   57577 main.go:141] libmachine: (kindnet-010870) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17585-9647
	I1108 00:35:01.905871   57577 main.go:141] libmachine: (kindnet-010870) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1108 00:35:01.905888   57577 main.go:141] libmachine: (kindnet-010870) DBG | Checking permissions on dir: /home/jenkins
	I1108 00:35:01.905904   57577 main.go:141] libmachine: (kindnet-010870) Setting executable bit set on /home/jenkins/minikube-integration/17585-9647/.minikube/machines (perms=drwxr-xr-x)
	I1108 00:35:01.905920   57577 main.go:141] libmachine: (kindnet-010870) Setting executable bit set on /home/jenkins/minikube-integration/17585-9647/.minikube (perms=drwxr-xr-x)
	I1108 00:35:01.905929   57577 main.go:141] libmachine: (kindnet-010870) Setting executable bit set on /home/jenkins/minikube-integration/17585-9647 (perms=drwxrwxr-x)
	I1108 00:35:01.905942   57577 main.go:141] libmachine: (kindnet-010870) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1108 00:35:01.905956   57577 main.go:141] libmachine: (kindnet-010870) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1108 00:35:01.905978   57577 main.go:141] libmachine: (kindnet-010870) DBG | Checking permissions on dir: /home
	I1108 00:35:01.905994   57577 main.go:141] libmachine: (kindnet-010870) Creating domain...
	I1108 00:35:01.906004   57577 main.go:141] libmachine: (kindnet-010870) DBG | Skipping /home - not owner
	I1108 00:35:01.907090   57577 main.go:141] libmachine: (kindnet-010870) define libvirt domain using xml: 
	I1108 00:35:01.907116   57577 main.go:141] libmachine: (kindnet-010870) <domain type='kvm'>
	I1108 00:35:01.907129   57577 main.go:141] libmachine: (kindnet-010870)   <name>kindnet-010870</name>
	I1108 00:35:01.907142   57577 main.go:141] libmachine: (kindnet-010870)   <memory unit='MiB'>3072</memory>
	I1108 00:35:01.907157   57577 main.go:141] libmachine: (kindnet-010870)   <vcpu>2</vcpu>
	I1108 00:35:01.907170   57577 main.go:141] libmachine: (kindnet-010870)   <features>
	I1108 00:35:01.907184   57577 main.go:141] libmachine: (kindnet-010870)     <acpi/>
	I1108 00:35:01.907203   57577 main.go:141] libmachine: (kindnet-010870)     <apic/>
	I1108 00:35:01.907216   57577 main.go:141] libmachine: (kindnet-010870)     <pae/>
	I1108 00:35:01.907232   57577 main.go:141] libmachine: (kindnet-010870)     
	I1108 00:35:01.907246   57577 main.go:141] libmachine: (kindnet-010870)   </features>
	I1108 00:35:01.907256   57577 main.go:141] libmachine: (kindnet-010870)   <cpu mode='host-passthrough'>
	I1108 00:35:01.907265   57577 main.go:141] libmachine: (kindnet-010870)   
	I1108 00:35:01.907278   57577 main.go:141] libmachine: (kindnet-010870)   </cpu>
	I1108 00:35:01.907285   57577 main.go:141] libmachine: (kindnet-010870)   <os>
	I1108 00:35:01.907291   57577 main.go:141] libmachine: (kindnet-010870)     <type>hvm</type>
	I1108 00:35:01.907297   57577 main.go:141] libmachine: (kindnet-010870)     <boot dev='cdrom'/>
	I1108 00:35:01.907308   57577 main.go:141] libmachine: (kindnet-010870)     <boot dev='hd'/>
	I1108 00:35:01.907319   57577 main.go:141] libmachine: (kindnet-010870)     <bootmenu enable='no'/>
	I1108 00:35:01.907332   57577 main.go:141] libmachine: (kindnet-010870)   </os>
	I1108 00:35:01.907342   57577 main.go:141] libmachine: (kindnet-010870)   <devices>
	I1108 00:35:01.907352   57577 main.go:141] libmachine: (kindnet-010870)     <disk type='file' device='cdrom'>
	I1108 00:35:01.907369   57577 main.go:141] libmachine: (kindnet-010870)       <source file='/home/jenkins/minikube-integration/17585-9647/.minikube/machines/kindnet-010870/boot2docker.iso'/>
	I1108 00:35:01.907377   57577 main.go:141] libmachine: (kindnet-010870)       <target dev='hdc' bus='scsi'/>
	I1108 00:35:01.907384   57577 main.go:141] libmachine: (kindnet-010870)       <readonly/>
	I1108 00:35:01.907391   57577 main.go:141] libmachine: (kindnet-010870)     </disk>
	I1108 00:35:01.907398   57577 main.go:141] libmachine: (kindnet-010870)     <disk type='file' device='disk'>
	I1108 00:35:01.907408   57577 main.go:141] libmachine: (kindnet-010870)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1108 00:35:01.907423   57577 main.go:141] libmachine: (kindnet-010870)       <source file='/home/jenkins/minikube-integration/17585-9647/.minikube/machines/kindnet-010870/kindnet-010870.rawdisk'/>
	I1108 00:35:01.907436   57577 main.go:141] libmachine: (kindnet-010870)       <target dev='hda' bus='virtio'/>
	I1108 00:35:01.907450   57577 main.go:141] libmachine: (kindnet-010870)     </disk>
	I1108 00:35:01.907467   57577 main.go:141] libmachine: (kindnet-010870)     <interface type='network'>
	I1108 00:35:01.907478   57577 main.go:141] libmachine: (kindnet-010870)       <source network='mk-kindnet-010870'/>
	I1108 00:35:01.907483   57577 main.go:141] libmachine: (kindnet-010870)       <model type='virtio'/>
	I1108 00:35:01.907498   57577 main.go:141] libmachine: (kindnet-010870)     </interface>
	I1108 00:35:01.907506   57577 main.go:141] libmachine: (kindnet-010870)     <interface type='network'>
	I1108 00:35:01.907520   57577 main.go:141] libmachine: (kindnet-010870)       <source network='default'/>
	I1108 00:35:01.907531   57577 main.go:141] libmachine: (kindnet-010870)       <model type='virtio'/>
	I1108 00:35:01.907560   57577 main.go:141] libmachine: (kindnet-010870)     </interface>
	I1108 00:35:01.907585   57577 main.go:141] libmachine: (kindnet-010870)     <serial type='pty'>
	I1108 00:35:01.907619   57577 main.go:141] libmachine: (kindnet-010870)       <target port='0'/>
	I1108 00:35:01.907648   57577 main.go:141] libmachine: (kindnet-010870)     </serial>
	I1108 00:35:01.907666   57577 main.go:141] libmachine: (kindnet-010870)     <console type='pty'>
	I1108 00:35:01.907685   57577 main.go:141] libmachine: (kindnet-010870)       <target type='serial' port='0'/>
	I1108 00:35:01.907699   57577 main.go:141] libmachine: (kindnet-010870)     </console>
	I1108 00:35:01.907711   57577 main.go:141] libmachine: (kindnet-010870)     <rng model='virtio'>
	I1108 00:35:01.907726   57577 main.go:141] libmachine: (kindnet-010870)       <backend model='random'>/dev/random</backend>
	I1108 00:35:01.907737   57577 main.go:141] libmachine: (kindnet-010870)     </rng>
	I1108 00:35:01.907750   57577 main.go:141] libmachine: (kindnet-010870)     
	I1108 00:35:01.907765   57577 main.go:141] libmachine: (kindnet-010870)     
	I1108 00:35:01.907779   57577 main.go:141] libmachine: (kindnet-010870)   </devices>
	I1108 00:35:01.907788   57577 main.go:141] libmachine: (kindnet-010870) </domain>
	I1108 00:35:01.907803   57577 main.go:141] libmachine: (kindnet-010870) 
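	(The XML above is what the kvm2 driver hands to libvirt to create the VM. For illustration only — a minimal sketch assuming the libvirt Go bindings at libvirt.org/go/libvirt, not the driver's actual code; the XML file name is a placeholder — defining and booting such a domain looks like:)

	// A minimal sketch, assuming libvirt.org/go/libvirt: define a domain from
	// XML like the block above, then boot it.
	package main

	import (
		"fmt"
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		xml, err := os.ReadFile("kindnet-010870.xml") // placeholder: domain XML as logged above
		if err != nil {
			panic(err)
		}
		conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
		if err != nil {
			panic(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // starts the defined domain ("Creating domain...")
			panic(err)
		}
		fmt.Println("domain defined and started")
	}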
	I1108 00:35:01.914573   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:a5:13:35 in network default
	I1108 00:35:01.915190   57577 main.go:141] libmachine: (kindnet-010870) Ensuring networks are active...
	I1108 00:35:01.915214   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:01.915864   57577 main.go:141] libmachine: (kindnet-010870) Ensuring network default is active
	I1108 00:35:01.916177   57577 main.go:141] libmachine: (kindnet-010870) Ensuring network mk-kindnet-010870 is active
	I1108 00:35:01.916697   57577 main.go:141] libmachine: (kindnet-010870) Getting domain xml...
	I1108 00:35:01.917461   57577 main.go:141] libmachine: (kindnet-010870) Creating domain...
	I1108 00:35:02.954948   57398 main.go:141] libmachine: (auto-010870) Calling .GetIP
	I1108 00:35:02.958071   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:02.958541   57398 main.go:141] libmachine: (auto-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:05:5f", ip: ""} in network mk-auto-010870: {Iface:virbr3 ExpiryTime:2023-11-08 01:34:48 +0000 UTC Type:0 Mac:52:54:00:89:05:5f Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-010870 Clientid:01:52:54:00:89:05:5f}
	I1108 00:35:02.958574   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined IP address 192.168.50.47 and MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:02.958795   57398 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1108 00:35:02.963833   57398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:35:02.977191   57398 localpath.go:92] copying /home/jenkins/minikube-integration/17585-9647/.minikube/client.crt -> /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/auto-010870/client.crt
	I1108 00:35:02.977340   57398 localpath.go:117] copying /home/jenkins/minikube-integration/17585-9647/.minikube/client.key -> /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/auto-010870/client.key
	I1108 00:35:02.977443   57398 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1108 00:35:02.977481   57398 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:35:03.015836   57398 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1108 00:35:03.015919   57398 ssh_runner.go:195] Run: which lz4
	I1108 00:35:03.021874   57398 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1108 00:35:03.028122   57398 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1108 00:35:03.028151   57398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1108 00:35:04.963211   57398 crio.go:444] Took 1.941375 seconds to copy over tarball
	I1108 00:35:04.963278   57398 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1108 00:35:03.303619   57577 main.go:141] libmachine: (kindnet-010870) Waiting to get IP...
	I1108 00:35:03.304476   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:03.304890   57577 main.go:141] libmachine: (kindnet-010870) DBG | unable to find current IP address of domain kindnet-010870 in network mk-kindnet-010870
	I1108 00:35:03.304935   57577 main.go:141] libmachine: (kindnet-010870) DBG | I1108 00:35:03.304866   57758 retry.go:31] will retry after 224.426387ms: waiting for machine to come up
	I1108 00:35:03.531496   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:03.532153   57577 main.go:141] libmachine: (kindnet-010870) DBG | unable to find current IP address of domain kindnet-010870 in network mk-kindnet-010870
	I1108 00:35:03.532178   57577 main.go:141] libmachine: (kindnet-010870) DBG | I1108 00:35:03.532078   57758 retry.go:31] will retry after 234.504771ms: waiting for machine to come up
	I1108 00:35:03.768706   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:03.769239   57577 main.go:141] libmachine: (kindnet-010870) DBG | unable to find current IP address of domain kindnet-010870 in network mk-kindnet-010870
	I1108 00:35:03.769265   57577 main.go:141] libmachine: (kindnet-010870) DBG | I1108 00:35:03.769154   57758 retry.go:31] will retry after 322.608564ms: waiting for machine to come up
	I1108 00:35:04.093790   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:04.094369   57577 main.go:141] libmachine: (kindnet-010870) DBG | unable to find current IP address of domain kindnet-010870 in network mk-kindnet-010870
	I1108 00:35:04.094399   57577 main.go:141] libmachine: (kindnet-010870) DBG | I1108 00:35:04.094333   57758 retry.go:31] will retry after 384.30528ms: waiting for machine to come up
	I1108 00:35:04.479804   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:04.480222   57577 main.go:141] libmachine: (kindnet-010870) DBG | unable to find current IP address of domain kindnet-010870 in network mk-kindnet-010870
	I1108 00:35:04.480253   57577 main.go:141] libmachine: (kindnet-010870) DBG | I1108 00:35:04.480198   57758 retry.go:31] will retry after 674.009891ms: waiting for machine to come up
	I1108 00:35:05.156187   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:05.156670   57577 main.go:141] libmachine: (kindnet-010870) DBG | unable to find current IP address of domain kindnet-010870 in network mk-kindnet-010870
	I1108 00:35:05.156707   57577 main.go:141] libmachine: (kindnet-010870) DBG | I1108 00:35:05.156588   57758 retry.go:31] will retry after 886.164066ms: waiting for machine to come up
	I1108 00:35:06.044717   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:06.045301   57577 main.go:141] libmachine: (kindnet-010870) DBG | unable to find current IP address of domain kindnet-010870 in network mk-kindnet-010870
	I1108 00:35:06.045344   57577 main.go:141] libmachine: (kindnet-010870) DBG | I1108 00:35:06.045226   57758 retry.go:31] will retry after 1.023741928s: waiting for machine to come up
	I1108 00:35:07.070452   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:07.071009   57577 main.go:141] libmachine: (kindnet-010870) DBG | unable to find current IP address of domain kindnet-010870 in network mk-kindnet-010870
	I1108 00:35:07.071060   57577 main.go:141] libmachine: (kindnet-010870) DBG | I1108 00:35:07.070968   57758 retry.go:31] will retry after 1.464468763s: waiting for machine to come up
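	(The retry.go lines above poll the domain's DHCP lease with growing, jittered delays until an IP appears. A minimal sketch of that wait-with-backoff pattern — getIP is a hypothetical stand-in for the lease lookup, and the intervals merely echo the logged ones:)

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls getIP until it succeeds or the deadline passes, sleeping
	// a growing, jittered interval between attempts, as in the log above.
	func waitForIP(getIP func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := getIP(); err == nil {
				return ip, nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
			time.Sleep(jittered)
			if delay < 5*time.Second {
				delay = delay * 3 / 2 // roughly the growth visible in the intervals above
			}
		}
		return "", errors.New("timed out waiting for machine to get an IP")
	}

	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 4 { // pretend the lease shows up on the 4th poll
				return "", errors.New("no lease yet")
			}
			return "192.168.61.17", nil
		}, 30*time.Second)
		fmt.Println(ip, err)
	}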
	I1108 00:35:08.239016   57398 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.27570295s)
	I1108 00:35:08.239041   57398 crio.go:451] Took 3.275804 seconds to extract the tarball
	I1108 00:35:08.239066   57398 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1108 00:35:08.283657   57398 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:35:08.363146   57398 crio.go:496] all images are preloaded for cri-o runtime.
	I1108 00:35:08.363173   57398 cache_images.go:84] Images are preloaded, skipping loading
	I1108 00:35:08.363259   57398 ssh_runner.go:195] Run: crio config
	I1108 00:35:08.430116   57398 cni.go:84] Creating CNI manager for ""
	I1108 00:35:08.430139   57398 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:35:08.430155   57398 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1108 00:35:08.430174   57398 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.47 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-010870 NodeName:auto-010870 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 00:35:08.430294   57398 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.47
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-010870"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.47
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.47"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
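	(For illustration only — a minimal sketch of how a kubeadm config like the one above can be rendered from the per-node values in the options struct logged at kubeadm.go:176, using Go's text/template; the template and field names here are invented for the example and are not minikube's actual template:)

	package main

	import (
		"os"
		"text/template"
	)

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: unix://{{.CRISocket}}
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.AdvertiseAddress}}
	`

	func main() {
		data := struct {
			AdvertiseAddress string
			BindPort         int
			CRISocket        string
			NodeName         string
		}{"192.168.50.47", 8443, "/var/run/crio/crio.sock", "auto-010870"}

		t := template.Must(template.New("kubeadm").Parse(tmpl))
		if err := t.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}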
	
	I1108 00:35:08.430371   57398 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=auto-010870 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:auto-010870 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1108 00:35:08.430426   57398 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1108 00:35:08.441418   57398 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 00:35:08.441478   57398 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 00:35:08.451687   57398 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (370 bytes)
	I1108 00:35:08.471865   57398 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 00:35:08.489068   57398 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2095 bytes)
	I1108 00:35:08.508155   57398 ssh_runner.go:195] Run: grep 192.168.50.47	control-plane.minikube.internal$ /etc/hosts
	I1108 00:35:08.512051   57398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.47	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
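	(Both /etc/hosts edits above use the same idempotent shell pattern: filter out any stale line for the name, append the fresh mapping, and copy the result back into place. A minimal Go sketch of the equivalent, writing to a scratch file rather than /etc/hosts and skipping the sudo step:)

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHost drops any existing entry for name (the same trailing "\t<name>"
	// the grep -v above filters), appends ip\tname, and writes the result
	// atomically via a temp file.
	func upsertHost(path, ip, name string) error {
		raw, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(raw), "\n") {
			if line == "" || strings.HasSuffix(line, "\t"+name) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		tmp := path + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			return err
		}
		return os.Rename(tmp, path) // stand-in for the `sudo cp` in the shell version
	}

	func main() {
		if err := upsertHost("hosts.scratch", "192.168.50.47", "control-plane.minikube.internal"); err != nil {
			panic(err)
		}
		fmt.Println("hosts.scratch updated")
	}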
	I1108 00:35:08.526135   57398 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/auto-010870 for IP: 192.168.50.47
	I1108 00:35:08.526162   57398 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:35:08.526299   57398 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1108 00:35:08.526365   57398 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1108 00:35:08.526471   57398 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/auto-010870/client.key
	I1108 00:35:08.526500   57398 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/auto-010870/apiserver.key.6a3aec60
	I1108 00:35:08.526517   57398 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/auto-010870/apiserver.crt.6a3aec60 with IP's: [192.168.50.47 10.96.0.1 127.0.0.1 10.0.0.1]
	I1108 00:35:08.594414   57398 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/auto-010870/apiserver.crt.6a3aec60 ...
	I1108 00:35:08.594450   57398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/auto-010870/apiserver.crt.6a3aec60: {Name:mk068e30126c4fb32986a7d8b8eb8d887cd537ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:35:08.594634   57398 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/auto-010870/apiserver.key.6a3aec60 ...
	I1108 00:35:08.594654   57398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/auto-010870/apiserver.key.6a3aec60: {Name:mkf056a8558e0f6fc88822394d186cc54c8e9453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:35:08.594771   57398 certs.go:337] copying /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/auto-010870/apiserver.crt.6a3aec60 -> /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/auto-010870/apiserver.crt
	I1108 00:35:08.594856   57398 certs.go:341] copying /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/auto-010870/apiserver.key.6a3aec60 -> /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/auto-010870/apiserver.key
	I1108 00:35:08.594929   57398 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/auto-010870/proxy-client.key
	I1108 00:35:08.594947   57398 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/auto-010870/proxy-client.crt with IP's: []
	I1108 00:35:08.705391   57398 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/auto-010870/proxy-client.crt ...
	I1108 00:35:08.705421   57398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/auto-010870/proxy-client.crt: {Name:mk8bcd1633cf041b7aae659aafda5a403dc95084 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:35:08.705574   57398 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/auto-010870/proxy-client.key ...
	I1108 00:35:08.705590   57398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/auto-010870/proxy-client.key: {Name:mk307625275037220f37561fef7f584917a2a6db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
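	(The crypto.go lines above mint certificates signed by the shared CA, with the IP SANs listed at 00:35:08.526517. A minimal, self-contained sketch of CA-signed cert generation with Go's crypto/x509 — it creates a throwaway CA first, and key sizes and lifetimes are arbitrary for the example, not minikube's values:)

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func must[T any](v T, err error) T {
		if err != nil {
			panic(err)
		}
		return v
	}

	func main() {
		// Throwaway CA standing in for minikubeCA.
		caKey := must(rsa.GenerateKey(rand.Reader, 2048))
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
		caCert := must(x509.ParseCertificate(caDER))

		// Serving cert with the IP SANs from the log line above.
		srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("192.168.50.47"), net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			},
		}
		der := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}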
	I1108 00:35:08.705820   57398 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem (1338 bytes)
	W1108 00:35:08.705865   57398 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848_empty.pem, impossibly tiny 0 bytes
	I1108 00:35:08.705883   57398 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 00:35:08.705912   57398 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1108 00:35:08.705937   57398 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1108 00:35:08.705961   57398 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1108 00:35:08.705997   57398 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:35:08.706563   57398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/auto-010870/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1108 00:35:08.732872   57398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/auto-010870/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1108 00:35:08.760542   57398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/auto-010870/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 00:35:08.785041   57398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/auto-010870/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 00:35:08.809361   57398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 00:35:08.836282   57398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 00:35:08.861688   57398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 00:35:08.886747   57398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 00:35:08.911004   57398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /usr/share/ca-certificates/168482.pem (1708 bytes)
	I1108 00:35:08.935818   57398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 00:35:08.959337   57398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem --> /usr/share/ca-certificates/16848.pem (1338 bytes)
	I1108 00:35:08.984823   57398 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 00:35:09.002186   57398 ssh_runner.go:195] Run: openssl version
	I1108 00:35:09.008162   57398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 00:35:09.019864   57398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:35:09.025043   57398 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:35:09.025100   57398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:35:09.032368   57398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 00:35:09.044525   57398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16848.pem && ln -fs /usr/share/ca-certificates/16848.pem /etc/ssl/certs/16848.pem"
	I1108 00:35:09.058555   57398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16848.pem
	I1108 00:35:09.064671   57398 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1108 00:35:09.064741   57398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16848.pem
	I1108 00:35:09.071087   57398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16848.pem /etc/ssl/certs/51391683.0"
	I1108 00:35:09.083178   57398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168482.pem && ln -fs /usr/share/ca-certificates/168482.pem /etc/ssl/certs/168482.pem"
	I1108 00:35:09.095206   57398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168482.pem
	I1108 00:35:09.100064   57398 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1108 00:35:09.100159   57398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168482.pem
	I1108 00:35:09.106661   57398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168482.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 00:35:09.119869   57398 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1108 00:35:09.124192   57398 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1108 00:35:09.124253   57398 kubeadm.go:404] StartCluster: {Name:auto-010870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:auto-010870 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:35:09.124349   57398 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 00:35:09.124404   57398 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:35:09.168443   57398 cri.go:89] found id: ""
	I1108 00:35:09.168547   57398 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 00:35:09.181770   57398 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:35:09.192541   57398 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:35:09.202740   57398 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:35:09.202792   57398 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1108 00:35:09.259038   57398 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1108 00:35:09.259113   57398 kubeadm.go:322] [preflight] Running pre-flight checks
	I1108 00:35:09.421219   57398 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 00:35:09.421364   57398 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 00:35:09.421547   57398 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1108 00:35:09.665761   57398 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 00:35:09.751634   57398 out.go:204]   - Generating certificates and keys ...
	I1108 00:35:09.751843   57398 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1108 00:35:09.751959   57398 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1108 00:35:09.797868   57398 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 00:35:09.927829   57398 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1108 00:35:10.185877   57398 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1108 00:35:10.380249   57398 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1108 00:35:10.574790   57398 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1108 00:35:10.574996   57398 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [auto-010870 localhost] and IPs [192.168.50.47 127.0.0.1 ::1]
	I1108 00:35:10.793790   57398 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1108 00:35:10.793949   57398 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [auto-010870 localhost] and IPs [192.168.50.47 127.0.0.1 ::1]
	I1108 00:35:10.865211   57398 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 00:35:11.091476   57398 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 00:35:11.368343   57398 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1108 00:35:11.368442   57398 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 00:35:11.559094   57398 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 00:35:11.824984   57398 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 00:35:11.995818   57398 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 00:35:12.139465   57398 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 00:35:12.140197   57398 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 00:35:12.142420   57398 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 00:35:08.537557   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:08.538031   57577 main.go:141] libmachine: (kindnet-010870) DBG | unable to find current IP address of domain kindnet-010870 in network mk-kindnet-010870
	I1108 00:35:08.538055   57577 main.go:141] libmachine: (kindnet-010870) DBG | I1108 00:35:08.537987   57758 retry.go:31] will retry after 1.75990095s: waiting for machine to come up
	I1108 00:35:10.299057   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:10.299523   57577 main.go:141] libmachine: (kindnet-010870) DBG | unable to find current IP address of domain kindnet-010870 in network mk-kindnet-010870
	I1108 00:35:10.299554   57577 main.go:141] libmachine: (kindnet-010870) DBG | I1108 00:35:10.299466   57758 retry.go:31] will retry after 2.228464869s: waiting for machine to come up
	I1108 00:35:12.529125   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:12.529655   57577 main.go:141] libmachine: (kindnet-010870) DBG | unable to find current IP address of domain kindnet-010870 in network mk-kindnet-010870
	I1108 00:35:12.529683   57577 main.go:141] libmachine: (kindnet-010870) DBG | I1108 00:35:12.529613   57758 retry.go:31] will retry after 1.907282943s: waiting for machine to come up
	I1108 00:35:12.144300   57398 out.go:204]   - Booting up control plane ...
	I1108 00:35:12.144445   57398 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 00:35:12.144570   57398 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 00:35:12.144760   57398 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 00:35:12.164082   57398 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 00:35:12.165197   57398 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 00:35:12.165265   57398 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1108 00:35:12.328936   57398 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1108 00:35:14.439259   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:14.439816   57577 main.go:141] libmachine: (kindnet-010870) DBG | unable to find current IP address of domain kindnet-010870 in network mk-kindnet-010870
	I1108 00:35:14.439845   57577 main.go:141] libmachine: (kindnet-010870) DBG | I1108 00:35:14.439768   57758 retry.go:31] will retry after 2.657450123s: waiting for machine to come up
	I1108 00:35:17.098705   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:17.099110   57577 main.go:141] libmachine: (kindnet-010870) DBG | unable to find current IP address of domain kindnet-010870 in network mk-kindnet-010870
	I1108 00:35:17.099140   57577 main.go:141] libmachine: (kindnet-010870) DBG | I1108 00:35:17.099069   57758 retry.go:31] will retry after 3.262226729s: waiting for machine to come up
	I1108 00:35:20.331601   57398 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002971 seconds
	I1108 00:35:20.331761   57398 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 00:35:20.352908   57398 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 00:35:20.887399   57398 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 00:35:20.887644   57398 kubeadm.go:322] [mark-control-plane] Marking the node auto-010870 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 00:35:21.401527   57398 kubeadm.go:322] [bootstrap-token] Using token: v0qso2.af54rwvf5v7fcvhp
	I1108 00:35:21.402886   57398 out.go:204]   - Configuring RBAC rules ...
	I1108 00:35:21.403000   57398 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 00:35:21.408218   57398 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 00:35:21.418445   57398 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 00:35:21.422244   57398 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 00:35:21.429254   57398 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 00:35:21.437699   57398 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 00:35:21.460740   57398 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 00:35:21.714922   57398 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1108 00:35:21.822717   57398 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1108 00:35:21.822738   57398 kubeadm.go:322] 
	I1108 00:35:21.822845   57398 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1108 00:35:21.822869   57398 kubeadm.go:322] 
	I1108 00:35:21.822966   57398 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1108 00:35:21.822977   57398 kubeadm.go:322] 
	I1108 00:35:21.823018   57398 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1108 00:35:21.823103   57398 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 00:35:21.823182   57398 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 00:35:21.823190   57398 kubeadm.go:322] 
	I1108 00:35:21.823263   57398 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1108 00:35:21.823273   57398 kubeadm.go:322] 
	I1108 00:35:21.823375   57398 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 00:35:21.823397   57398 kubeadm.go:322] 
	I1108 00:35:21.823458   57398 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1108 00:35:21.823574   57398 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 00:35:21.823670   57398 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 00:35:21.823693   57398 kubeadm.go:322] 
	I1108 00:35:21.823808   57398 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 00:35:21.823923   57398 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1108 00:35:21.823942   57398 kubeadm.go:322] 
	I1108 00:35:21.824049   57398 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token v0qso2.af54rwvf5v7fcvhp \
	I1108 00:35:21.824167   57398 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 \
	I1108 00:35:21.824205   57398 kubeadm.go:322] 	--control-plane 
	I1108 00:35:21.824214   57398 kubeadm.go:322] 
	I1108 00:35:21.824309   57398 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1108 00:35:21.824318   57398 kubeadm.go:322] 
	I1108 00:35:21.824414   57398 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token v0qso2.af54rwvf5v7fcvhp \
	I1108 00:35:21.824555   57398 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 
	I1108 00:35:21.824782   57398 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 00:35:21.824825   57398 cni.go:84] Creating CNI manager for ""
	I1108 00:35:21.824839   57398 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:35:21.826491   57398 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:35:21.827712   57398 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:35:21.900024   57398 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1108 00:35:21.965290   57398 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 00:35:21.965337   57398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:35:21.965429   57398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=auto-010870 minikube.k8s.io/updated_at=2023_11_08T00_35_21_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:35:20.362590   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:20.363107   57577 main.go:141] libmachine: (kindnet-010870) DBG | unable to find current IP address of domain kindnet-010870 in network mk-kindnet-010870
	I1108 00:35:20.363129   57577 main.go:141] libmachine: (kindnet-010870) DBG | I1108 00:35:20.363049   57758 retry.go:31] will retry after 4.883008327s: waiting for machine to come up
	I1108 00:35:22.204449   57398 ops.go:34] apiserver oom_adj: -16
	I1108 00:35:22.204728   57398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:35:22.326911   57398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:35:22.948788   57398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:35:23.448589   57398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:35:23.948521   57398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:35:24.449049   57398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:35:24.949172   57398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:35:25.449392   57398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:35:25.948656   57398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:35:26.448650   57398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:35:26.949385   57398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:35:25.249920   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:25.250494   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has current primary IP address 192.168.61.17 and MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:25.250530   57577 main.go:141] libmachine: (kindnet-010870) Found IP for machine: 192.168.61.17
	I1108 00:35:25.250550   57577 main.go:141] libmachine: (kindnet-010870) Reserving static IP address...
	I1108 00:35:25.250856   57577 main.go:141] libmachine: (kindnet-010870) DBG | unable to find host DHCP lease matching {name: "kindnet-010870", mac: "52:54:00:5f:a8:02", ip: "192.168.61.17"} in network mk-kindnet-010870
	I1108 00:35:25.324950   57577 main.go:141] libmachine: (kindnet-010870) DBG | Getting to WaitForSSH function...
	I1108 00:35:25.324985   57577 main.go:141] libmachine: (kindnet-010870) Reserved static IP address: 192.168.61.17
	I1108 00:35:25.324999   57577 main.go:141] libmachine: (kindnet-010870) Waiting for SSH to be available...
	I1108 00:35:25.327699   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:25.328031   57577 main.go:141] libmachine: (kindnet-010870) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:5f:a8:02", ip: ""} in network mk-kindnet-010870
	I1108 00:35:25.328062   57577 main.go:141] libmachine: (kindnet-010870) DBG | unable to find defined IP address of network mk-kindnet-010870 interface with MAC address 52:54:00:5f:a8:02
	I1108 00:35:25.328262   57577 main.go:141] libmachine: (kindnet-010870) DBG | Using SSH client type: external
	I1108 00:35:25.328278   57577 main.go:141] libmachine: (kindnet-010870) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/kindnet-010870/id_rsa (-rw-------)
	I1108 00:35:25.328296   57577 main.go:141] libmachine: (kindnet-010870) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/kindnet-010870/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1108 00:35:25.328308   57577 main.go:141] libmachine: (kindnet-010870) DBG | About to run SSH command:
	I1108 00:35:25.328344   57577 main.go:141] libmachine: (kindnet-010870) DBG | exit 0
	I1108 00:35:25.331880   57577 main.go:141] libmachine: (kindnet-010870) DBG | SSH cmd err, output: exit status 255: 
	I1108 00:35:25.331907   57577 main.go:141] libmachine: (kindnet-010870) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1108 00:35:25.331919   57577 main.go:141] libmachine: (kindnet-010870) DBG | command : exit 0
	I1108 00:35:25.331928   57577 main.go:141] libmachine: (kindnet-010870) DBG | err     : exit status 255
	I1108 00:35:25.331940   57577 main.go:141] libmachine: (kindnet-010870) DBG | output  : 
	I1108 00:35:28.332992   57577 main.go:141] libmachine: (kindnet-010870) DBG | Getting to WaitForSSH function...
	I1108 00:35:28.335687   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:28.336041   57577 main.go:141] libmachine: (kindnet-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:a8:02", ip: ""} in network mk-kindnet-010870: {Iface:virbr4 ExpiryTime:2023-11-08 01:35:18 +0000 UTC Type:0 Mac:52:54:00:5f:a8:02 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:kindnet-010870 Clientid:01:52:54:00:5f:a8:02}
	I1108 00:35:28.336074   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined IP address 192.168.61.17 and MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:28.336234   57577 main.go:141] libmachine: (kindnet-010870) DBG | Using SSH client type: external
	I1108 00:35:28.336262   57577 main.go:141] libmachine: (kindnet-010870) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/kindnet-010870/id_rsa (-rw-------)
	I1108 00:35:28.336304   57577 main.go:141] libmachine: (kindnet-010870) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/kindnet-010870/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1108 00:35:28.336326   57577 main.go:141] libmachine: (kindnet-010870) DBG | About to run SSH command:
	I1108 00:35:28.336342   57577 main.go:141] libmachine: (kindnet-010870) DBG | exit 0
	I1108 00:35:28.424865   57577 main.go:141] libmachine: (kindnet-010870) DBG | SSH cmd err, output: <nil>: 
	I1108 00:35:28.425107   57577 main.go:141] libmachine: (kindnet-010870) KVM machine creation complete!
	I1108 00:35:28.425444   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetConfigRaw
	I1108 00:35:28.425968   57577 main.go:141] libmachine: (kindnet-010870) Calling .DriverName
	I1108 00:35:28.426183   57577 main.go:141] libmachine: (kindnet-010870) Calling .DriverName
	I1108 00:35:28.426349   57577 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1108 00:35:28.426366   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetState
	I1108 00:35:28.427658   57577 main.go:141] libmachine: Detecting operating system of created instance...
	I1108 00:35:28.427675   57577 main.go:141] libmachine: Waiting for SSH to be available...
	I1108 00:35:28.427684   57577 main.go:141] libmachine: Getting to WaitForSSH function...
	I1108 00:35:28.427693   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHHostname
	I1108 00:35:28.429899   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:28.430268   57577 main.go:141] libmachine: (kindnet-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:a8:02", ip: ""} in network mk-kindnet-010870: {Iface:virbr4 ExpiryTime:2023-11-08 01:35:18 +0000 UTC Type:0 Mac:52:54:00:5f:a8:02 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:kindnet-010870 Clientid:01:52:54:00:5f:a8:02}
	I1108 00:35:28.430300   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined IP address 192.168.61.17 and MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:28.430502   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHPort
	I1108 00:35:28.430708   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHKeyPath
	I1108 00:35:28.430834   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHKeyPath
	I1108 00:35:28.430973   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHUsername
	I1108 00:35:28.431167   57577 main.go:141] libmachine: Using SSH client type: native
	I1108 00:35:28.431702   57577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.61.17 22 <nil> <nil>}
	I1108 00:35:28.431716   57577 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1108 00:35:28.552324   57577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
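	(The WaitForSSH probe above dials the new machine and runs `exit 0` until it succeeds. A minimal sketch of the same check using golang.org/x/crypto/ssh rather than the external /usr/bin/ssh client shown in the debug lines; the address and key path are placeholders:)

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// sshReady dials addr, authenticates with the private key, and runs
	// `exit 0`; a nil error means the machine is reachable over SSH.
	func sshReady(addr, user, keyPath string) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		return sess.Run("exit 0")
	}

	func main() {
		if err := sshReady("192.168.61.17:22", "docker", "/path/to/id_rsa"); err != nil {
			fmt.Println("ssh not ready:", err)
			return
		}
		fmt.Println("ssh available")
	}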
	I1108 00:35:28.552348   57577 main.go:141] libmachine: Detecting the provisioner...
	I1108 00:35:28.552359   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHHostname
	I1108 00:35:28.555462   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:28.555964   57577 main.go:141] libmachine: (kindnet-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:a8:02", ip: ""} in network mk-kindnet-010870: {Iface:virbr4 ExpiryTime:2023-11-08 01:35:18 +0000 UTC Type:0 Mac:52:54:00:5f:a8:02 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:kindnet-010870 Clientid:01:52:54:00:5f:a8:02}
	I1108 00:35:28.556010   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined IP address 192.168.61.17 and MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:28.556344   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHPort
	I1108 00:35:28.556545   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHKeyPath
	I1108 00:35:28.556750   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHKeyPath
	I1108 00:35:28.556929   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHUsername
	I1108 00:35:28.557118   57577 main.go:141] libmachine: Using SSH client type: native
	I1108 00:35:28.557617   57577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.61.17 22 <nil> <nil>}
	I1108 00:35:28.557637   57577 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1108 00:35:28.669682   57577 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb75713b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1108 00:35:28.669763   57577 main.go:141] libmachine: found compatible host: buildroot
	I1108 00:35:28.669778   57577 main.go:141] libmachine: Provisioning with buildroot...
	I1108 00:35:28.669790   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetMachineName
	I1108 00:35:28.670097   57577 buildroot.go:166] provisioning hostname "kindnet-010870"
	I1108 00:35:28.670124   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetMachineName
	I1108 00:35:28.670323   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHHostname
	I1108 00:35:28.673317   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:28.673679   57577 main.go:141] libmachine: (kindnet-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:a8:02", ip: ""} in network mk-kindnet-010870: {Iface:virbr4 ExpiryTime:2023-11-08 01:35:18 +0000 UTC Type:0 Mac:52:54:00:5f:a8:02 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:kindnet-010870 Clientid:01:52:54:00:5f:a8:02}
	I1108 00:35:28.673721   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined IP address 192.168.61.17 and MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:28.673842   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHPort
	I1108 00:35:28.674034   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHKeyPath
	I1108 00:35:28.674196   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHKeyPath
	I1108 00:35:28.674332   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHUsername
	I1108 00:35:28.674494   57577 main.go:141] libmachine: Using SSH client type: native
	I1108 00:35:28.674845   57577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.61.17 22 <nil> <nil>}
	I1108 00:35:28.674871   57577 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-010870 && echo "kindnet-010870" | sudo tee /etc/hostname
	I1108 00:35:28.802454   57577 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-010870
	
	I1108 00:35:28.802486   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHHostname
	I1108 00:35:28.805333   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:28.805695   57577 main.go:141] libmachine: (kindnet-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:a8:02", ip: ""} in network mk-kindnet-010870: {Iface:virbr4 ExpiryTime:2023-11-08 01:35:18 +0000 UTC Type:0 Mac:52:54:00:5f:a8:02 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:kindnet-010870 Clientid:01:52:54:00:5f:a8:02}
	I1108 00:35:28.805722   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined IP address 192.168.61.17 and MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:28.805896   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHPort
	I1108 00:35:28.806105   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHKeyPath
	I1108 00:35:28.806259   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHKeyPath
	I1108 00:35:28.806431   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHUsername
	I1108 00:35:28.806595   57577 main.go:141] libmachine: Using SSH client type: native
	I1108 00:35:28.806908   57577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.61.17 22 <nil> <nil>}
	I1108 00:35:28.806925   57577 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-010870' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-010870/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-010870' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 00:35:28.929336   57577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
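The two-branch script above keeps the hostname change idempotent: if /etc/hosts already carries a 127.0.1.1 line (the Debian-style loopback alias for the machine's own name), it is rewritten in place; otherwise one is appended, so repeated provisioning never stacks duplicate entries. Afterwards /etc/hosts contains a line like:

    127.0.1.1 kindnet-010870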
	I1108 00:35:28.929382   57577 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1108 00:35:28.929406   57577 buildroot.go:174] setting up certificates
	I1108 00:35:28.929424   57577 provision.go:83] configureAuth start
	I1108 00:35:28.929440   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetMachineName
	I1108 00:35:28.929696   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetIP
	I1108 00:35:28.932292   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:28.932671   57577 main.go:141] libmachine: (kindnet-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:a8:02", ip: ""} in network mk-kindnet-010870: {Iface:virbr4 ExpiryTime:2023-11-08 01:35:18 +0000 UTC Type:0 Mac:52:54:00:5f:a8:02 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:kindnet-010870 Clientid:01:52:54:00:5f:a8:02}
	I1108 00:35:28.932702   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined IP address 192.168.61.17 and MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:28.932878   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHHostname
	I1108 00:35:28.934962   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:28.935340   57577 main.go:141] libmachine: (kindnet-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:a8:02", ip: ""} in network mk-kindnet-010870: {Iface:virbr4 ExpiryTime:2023-11-08 01:35:18 +0000 UTC Type:0 Mac:52:54:00:5f:a8:02 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:kindnet-010870 Clientid:01:52:54:00:5f:a8:02}
	I1108 00:35:28.935365   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined IP address 192.168.61.17 and MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:28.935539   57577 provision.go:138] copyHostCerts
	I1108 00:35:28.935599   57577 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1108 00:35:28.935617   57577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1108 00:35:28.935690   57577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1108 00:35:28.935804   57577 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1108 00:35:28.935814   57577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1108 00:35:28.935834   57577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1108 00:35:28.935924   57577 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1108 00:35:28.935932   57577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1108 00:35:28.935950   57577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1108 00:35:28.936004   57577 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.kindnet-010870 san=[192.168.61.17 192.168.61.17 localhost 127.0.0.1 minikube kindnet-010870]
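The SAN list above (the VM IP twice, localhost, 127.0.0.1, "minikube", and the machine name) is what lets clients reach the runtime endpoint under any of those names without TLS verification errors. A rough sketch of minting such a certificate with Go's crypto/x509, self-signed for brevity where the logged flow signs with the ca.pem/ca-key.pem pair:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"log"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.kindnet-010870"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(1, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs exactly as logged for this machine.
    		DNSNames:    []string{"localhost", "minikube", "kindnet-010870"},
    		IPAddresses: []net.IP{net.ParseIP("192.168.61.17"), net.ParseIP("127.0.0.1")},
    	}
    	// Self-signed sketch; the real flow passes the CA cert and CA key here.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("server cert: %d DER bytes", len(der))
    }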
	I1108 00:35:29.180987   57577 provision.go:172] copyRemoteCerts
	I1108 00:35:29.181047   57577 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 00:35:29.181073   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHHostname
	I1108 00:35:29.183761   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:29.184078   57577 main.go:141] libmachine: (kindnet-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:a8:02", ip: ""} in network mk-kindnet-010870: {Iface:virbr4 ExpiryTime:2023-11-08 01:35:18 +0000 UTC Type:0 Mac:52:54:00:5f:a8:02 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:kindnet-010870 Clientid:01:52:54:00:5f:a8:02}
	I1108 00:35:29.184099   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined IP address 192.168.61.17 and MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:29.184332   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHPort
	I1108 00:35:29.184537   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHKeyPath
	I1108 00:35:29.184709   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHUsername
	I1108 00:35:29.184855   57577 sshutil.go:53] new ssh client: &{IP:192.168.61.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/kindnet-010870/id_rsa Username:docker}
	I1108 00:35:29.271312   57577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 00:35:29.294068   57577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 00:35:29.316189   57577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 00:35:29.338736   57577 provision.go:86] duration metric: configureAuth took 409.299022ms
	I1108 00:35:29.338758   57577 buildroot.go:189] setting minikube options for container-runtime
	I1108 00:35:29.338928   57577 config.go:182] Loaded profile config "kindnet-010870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:35:29.338989   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHHostname
	I1108 00:35:29.341641   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:29.342097   57577 main.go:141] libmachine: (kindnet-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:a8:02", ip: ""} in network mk-kindnet-010870: {Iface:virbr4 ExpiryTime:2023-11-08 01:35:18 +0000 UTC Type:0 Mac:52:54:00:5f:a8:02 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:kindnet-010870 Clientid:01:52:54:00:5f:a8:02}
	I1108 00:35:29.342125   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined IP address 192.168.61.17 and MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:29.342310   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHPort
	I1108 00:35:29.342496   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHKeyPath
	I1108 00:35:29.342675   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHKeyPath
	I1108 00:35:29.342811   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHUsername
	I1108 00:35:29.342956   57577 main.go:141] libmachine: Using SSH client type: native
	I1108 00:35:29.343285   57577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.61.17 22 <nil> <nil>}
	I1108 00:35:29.343307   57577 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 00:35:29.683309   57577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
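The %!s(MISSING) token in the command above (and the later "date +%!s(MISSING).%!N(MISSING)") is not part of what actually ran on the guest: it is Go's fmt package flagging a verb that reached a Printf-style call with no matching operand, so the logger mangled literal percent signs in the shell command. A minimal reproduction:

    // fmt renders an unmatched verb as %!s(MISSING); the %s here was meant
    // to be passed through to the shell's printf, not formatted by Go.
    fmt.Printf("sudo mkdir -p /etc/sysconfig && printf %s ...\n")
    // prints: sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) ...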
	
	I1108 00:35:29.683337   57577 main.go:141] libmachine: Checking connection to Docker...
	I1108 00:35:29.683360   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetURL
	I1108 00:35:29.684591   57577 main.go:141] libmachine: (kindnet-010870) DBG | Using libvirt version 6000000
	I1108 00:35:29.686766   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:29.687143   57577 main.go:141] libmachine: (kindnet-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:a8:02", ip: ""} in network mk-kindnet-010870: {Iface:virbr4 ExpiryTime:2023-11-08 01:35:18 +0000 UTC Type:0 Mac:52:54:00:5f:a8:02 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:kindnet-010870 Clientid:01:52:54:00:5f:a8:02}
	I1108 00:35:29.687200   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined IP address 192.168.61.17 and MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:29.687337   57577 main.go:141] libmachine: Docker is up and running!
	I1108 00:35:29.687349   57577 main.go:141] libmachine: Reticulating splines...
	I1108 00:35:29.687354   57577 client.go:171] LocalClient.Create took 28.193681432s
	I1108 00:35:29.687375   57577 start.go:167] duration metric: libmachine.API.Create for "kindnet-010870" took 28.19374797s
	I1108 00:35:29.687387   57577 start.go:300] post-start starting for "kindnet-010870" (driver="kvm2")
	I1108 00:35:29.687402   57577 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 00:35:29.687421   57577 main.go:141] libmachine: (kindnet-010870) Calling .DriverName
	I1108 00:35:29.687666   57577 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 00:35:29.687702   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHHostname
	I1108 00:35:29.689951   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:29.690362   57577 main.go:141] libmachine: (kindnet-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:a8:02", ip: ""} in network mk-kindnet-010870: {Iface:virbr4 ExpiryTime:2023-11-08 01:35:18 +0000 UTC Type:0 Mac:52:54:00:5f:a8:02 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:kindnet-010870 Clientid:01:52:54:00:5f:a8:02}
	I1108 00:35:29.690423   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined IP address 192.168.61.17 and MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:29.690558   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHPort
	I1108 00:35:29.690747   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHKeyPath
	I1108 00:35:29.690909   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHUsername
	I1108 00:35:29.691063   57577 sshutil.go:53] new ssh client: &{IP:192.168.61.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/kindnet-010870/id_rsa Username:docker}
	I1108 00:35:29.782453   57577 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 00:35:29.786982   57577 info.go:137] Remote host: Buildroot 2021.02.12
	I1108 00:35:29.787008   57577 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1108 00:35:29.787078   57577 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1108 00:35:29.787182   57577 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1108 00:35:29.787298   57577 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 00:35:29.796174   57577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:35:29.819033   57577 start.go:303] post-start completed in 131.625885ms
	I1108 00:35:29.819076   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetConfigRaw
	I1108 00:35:29.819650   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetIP
	I1108 00:35:29.822416   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:29.822781   57577 main.go:141] libmachine: (kindnet-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:a8:02", ip: ""} in network mk-kindnet-010870: {Iface:virbr4 ExpiryTime:2023-11-08 01:35:18 +0000 UTC Type:0 Mac:52:54:00:5f:a8:02 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:kindnet-010870 Clientid:01:52:54:00:5f:a8:02}
	I1108 00:35:29.822818   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined IP address 192.168.61.17 and MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:29.823066   57577 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/kindnet-010870/config.json ...
	I1108 00:35:29.823269   57577 start.go:128] duration metric: createHost completed in 28.353391472s
	I1108 00:35:29.823290   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHHostname
	I1108 00:35:29.825703   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:29.826026   57577 main.go:141] libmachine: (kindnet-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:a8:02", ip: ""} in network mk-kindnet-010870: {Iface:virbr4 ExpiryTime:2023-11-08 01:35:18 +0000 UTC Type:0 Mac:52:54:00:5f:a8:02 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:kindnet-010870 Clientid:01:52:54:00:5f:a8:02}
	I1108 00:35:29.826053   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined IP address 192.168.61.17 and MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:29.826191   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHPort
	I1108 00:35:29.826367   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHKeyPath
	I1108 00:35:29.826517   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHKeyPath
	I1108 00:35:29.826643   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHUsername
	I1108 00:35:29.826829   57577 main.go:141] libmachine: Using SSH client type: native
	I1108 00:35:29.827140   57577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.61.17 22 <nil> <nil>}
	I1108 00:35:29.827150   57577 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1108 00:35:29.941392   57577 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699403729.926864858
	
	I1108 00:35:29.941414   57577 fix.go:206] guest clock: 1699403729.926864858
	I1108 00:35:29.941423   57577 fix.go:219] Guest: 2023-11-08 00:35:29.926864858 +0000 UTC Remote: 2023-11-08 00:35:29.82328059 +0000 UTC m=+56.699532238 (delta=103.584268ms)
	I1108 00:35:29.941456   57577 fix.go:190] guest clock delta is within tolerance: 103.584268ms
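The guest-clock check above runs "date +%s.%N" inside the VM (the verbs were eaten by the logger, as noted earlier), parses the epoch output, and compares it with the host's idea of now; the 103ms delta is accepted, and only a larger skew would trigger a resync. A minimal sketch of the comparison, with the tolerance assumed (the log only says this delta is within it):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Values parsed from the log: guest "date +%s.%N" vs. the host clock.
    	guest := time.Unix(1699403729, 926864858)
    	host := time.Date(2023, 11, 8, 0, 35, 29, 823280590, time.UTC)
    	delta := guest.Sub(host)
    	const tolerance = time.Second // assumed threshold, not stated in the log
    	fmt.Printf("delta=%v within tolerance=%v\n", delta,
    		delta > -tolerance && delta < tolerance)
    }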
	I1108 00:35:29.941477   57577 start.go:83] releasing machines lock for "kindnet-010870", held for 28.471743244s
	I1108 00:35:29.941505   57577 main.go:141] libmachine: (kindnet-010870) Calling .DriverName
	I1108 00:35:29.941727   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetIP
	I1108 00:35:29.944250   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:29.944669   57577 main.go:141] libmachine: (kindnet-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:a8:02", ip: ""} in network mk-kindnet-010870: {Iface:virbr4 ExpiryTime:2023-11-08 01:35:18 +0000 UTC Type:0 Mac:52:54:00:5f:a8:02 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:kindnet-010870 Clientid:01:52:54:00:5f:a8:02}
	I1108 00:35:29.944697   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined IP address 192.168.61.17 and MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:29.944853   57577 main.go:141] libmachine: (kindnet-010870) Calling .DriverName
	I1108 00:35:29.945370   57577 main.go:141] libmachine: (kindnet-010870) Calling .DriverName
	I1108 00:35:29.945505   57577 main.go:141] libmachine: (kindnet-010870) Calling .DriverName
	I1108 00:35:29.945601   57577 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 00:35:29.945636   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHHostname
	I1108 00:35:29.945694   57577 ssh_runner.go:195] Run: cat /version.json
	I1108 00:35:29.945718   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHHostname
	I1108 00:35:29.948279   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:29.948435   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:29.948755   57577 main.go:141] libmachine: (kindnet-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:a8:02", ip: ""} in network mk-kindnet-010870: {Iface:virbr4 ExpiryTime:2023-11-08 01:35:18 +0000 UTC Type:0 Mac:52:54:00:5f:a8:02 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:kindnet-010870 Clientid:01:52:54:00:5f:a8:02}
	I1108 00:35:29.948784   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined IP address 192.168.61.17 and MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:29.948854   57577 main.go:141] libmachine: (kindnet-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:a8:02", ip: ""} in network mk-kindnet-010870: {Iface:virbr4 ExpiryTime:2023-11-08 01:35:18 +0000 UTC Type:0 Mac:52:54:00:5f:a8:02 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:kindnet-010870 Clientid:01:52:54:00:5f:a8:02}
	I1108 00:35:29.948885   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined IP address 192.168.61.17 and MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:29.948909   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHPort
	I1108 00:35:29.949077   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHKeyPath
	I1108 00:35:29.949163   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHPort
	I1108 00:35:29.949223   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHUsername
	I1108 00:35:29.949300   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHKeyPath
	I1108 00:35:29.949373   57577 sshutil.go:53] new ssh client: &{IP:192.168.61.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/kindnet-010870/id_rsa Username:docker}
	I1108 00:35:29.949437   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetSSHUsername
	I1108 00:35:29.949568   57577 sshutil.go:53] new ssh client: &{IP:192.168.61.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/kindnet-010870/id_rsa Username:docker}
	I1108 00:35:30.054631   57577 ssh_runner.go:195] Run: systemctl --version
	I1108 00:35:30.060187   57577 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 00:35:30.217027   57577 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 00:35:30.223564   57577 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 00:35:30.223647   57577 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:35:30.237615   57577 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 00:35:30.237637   57577 start.go:472] detecting cgroup driver to use...
	I1108 00:35:30.237692   57577 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 00:35:30.254023   57577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 00:35:30.267232   57577 docker.go:203] disabling cri-docker service (if available) ...
	I1108 00:35:30.267272   57577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 00:35:30.280449   57577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 00:35:30.294258   57577 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 00:35:30.429837   57577 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 00:35:30.565590   57577 docker.go:219] disabling docker service ...
	I1108 00:35:30.565721   57577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 00:35:30.580847   57577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 00:35:30.594002   57577 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 00:35:30.718269   57577 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 00:35:30.837382   57577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 00:35:30.850436   57577 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 00:35:30.868254   57577 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1108 00:35:30.868318   57577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:35:30.878025   57577 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 00:35:30.878081   57577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:35:30.887518   57577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:35:30.896735   57577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
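The three sed edits above rewrite CRI-O's drop-in so its pause image and cgroup driver agree with what kubelet will be configured to use, and pin conmon into the pod's cgroup. Assuming the stock section headers (the log shows only the key rewrites), the resulting /etc/crio/crio.conf.d/02-crio.conf reads roughly:

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"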
	I1108 00:35:30.906589   57577 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 00:35:30.916504   57577 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 00:35:30.924735   57577 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1108 00:35:30.924789   57577 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1108 00:35:30.936935   57577 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
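The status-255 sysctl above is the expected first half of a probe-then-load pattern: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once br_netfilter is loaded, so a failed read means "module missing", modprobe loads it, and ip_forward is then enabled for pod traffic. A compact sketch of the same fallback (paths as in the log; the sketch assumes it runs as root, so sudo is elided):

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
    	if _, err := os.Stat(key); err != nil {
    		// Key absent: br_netfilter not loaded yet, mirror the modprobe above.
    		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
    			log.Fatal(err)
    		}
    	}
    	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
    		log.Fatal(err)
    	}
    }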
	I1108 00:35:30.945709   57577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 00:35:31.059028   57577 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 00:35:31.231288   57577 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 00:35:31.231379   57577 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 00:35:31.240293   57577 start.go:540] Will wait 60s for crictl version
	I1108 00:35:31.240353   57577 ssh_runner.go:195] Run: which crictl
	I1108 00:35:31.244343   57577 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 00:35:31.285154   57577 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1108 00:35:31.285261   57577 ssh_runner.go:195] Run: crio --version
	I1108 00:35:31.338689   57577 ssh_runner.go:195] Run: crio --version
	I1108 00:35:31.396249   57577 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1108 00:35:27.448947   57398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:35:27.949253   57398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:35:28.448579   57398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:35:28.948500   57398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:35:29.448618   57398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:35:29.948910   57398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:35:30.448975   57398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:35:30.948422   57398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:35:31.449115   57398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:35:31.948476   57398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:35:31.397697   57577 main.go:141] libmachine: (kindnet-010870) Calling .GetIP
	I1108 00:35:31.400359   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:31.400824   57577 main.go:141] libmachine: (kindnet-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:a8:02", ip: ""} in network mk-kindnet-010870: {Iface:virbr4 ExpiryTime:2023-11-08 01:35:18 +0000 UTC Type:0 Mac:52:54:00:5f:a8:02 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:kindnet-010870 Clientid:01:52:54:00:5f:a8:02}
	I1108 00:35:31.400854   57577 main.go:141] libmachine: (kindnet-010870) DBG | domain kindnet-010870 has defined IP address 192.168.61.17 and MAC address 52:54:00:5f:a8:02 in network mk-kindnet-010870
	I1108 00:35:31.401021   57577 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1108 00:35:31.405309   57577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:35:31.418093   57577 localpath.go:92] copying /home/jenkins/minikube-integration/17585-9647/.minikube/client.crt -> /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/kindnet-010870/client.crt
	I1108 00:35:31.418250   57577 localpath.go:117] copying /home/jenkins/minikube-integration/17585-9647/.minikube/client.key -> /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/kindnet-010870/client.key
	I1108 00:35:31.418381   57577 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1108 00:35:31.418461   57577 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:35:31.458922   57577 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1108 00:35:31.458993   57577 ssh_runner.go:195] Run: which lz4
	I1108 00:35:31.463188   57577 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1108 00:35:31.467242   57577 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1108 00:35:31.467275   57577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
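The failed stat above is another probe: /preloaded.tar.lz4 missing means this guest was never seeded, so the ~437 MiB preload is pushed over scp and unpacked into CRI-O's image store instead of pulling every image from a registry. The tarball name encodes what it is valid for:

    preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
    preload schema v18, Kubernetes v1.28.3, runtime cri-o, overlay storage, amd64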
	I1108 00:35:32.448723   57398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:35:32.948626   57398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:35:33.449473   57398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:35:33.948853   57398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:35:34.448702   57398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:35:34.639250   57398 kubeadm.go:1081] duration metric: took 12.673962504s to wait for elevateKubeSystemPrivileges.
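The half-second cadence of the "kubectl get sa default" runs above (process 57398, interleaved with the kindnet machine's logs) is a poll: immediately after kubeadm finishes, kube-controller-manager has not yet created the "default" ServiceAccount, and workloads cannot be admitted until it exists, so minikube retries until the get succeeds (12.7s in this run). A sketch of that loop, using the paths from the log and a deadline the log does not state:

    package main

    import (
    	"log"
    	"os/exec"
    	"time"
    )

    func main() {
    	kubectl := "/var/lib/minikube/binaries/v1.28.3/kubectl"
    	deadline := time.Now().Add(2 * time.Minute) // assumed cap, not from the log
    	for time.Now().Before(deadline) {
    		err := exec.Command("sudo", kubectl, "get", "sa", "default",
    			"--kubeconfig", "/var/lib/minikube/kubeconfig").Run()
    		if err == nil {
    			log.Println("default ServiceAccount exists")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // matches the cadence in the log
    	}
    	log.Fatal("timed out waiting for the default ServiceAccount")
    }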
	I1108 00:35:34.639279   57398 kubeadm.go:406] StartCluster complete in 25.515029833s
	I1108 00:35:34.639302   57398 settings.go:142] acquiring lock: {Name:mk24113e0811d0822c92609e9886aa6fa175d90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:35:34.639376   57398 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:35:34.641768   57398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:35:34.642045   57398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 00:35:34.642196   57398 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1108 00:35:34.642282   57398 addons.go:69] Setting storage-provisioner=true in profile "auto-010870"
	I1108 00:35:34.642302   57398 config.go:182] Loaded profile config "auto-010870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:35:34.642315   57398 addons.go:69] Setting default-storageclass=true in profile "auto-010870"
	I1108 00:35:34.642330   57398 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-010870"
	I1108 00:35:34.642306   57398 addons.go:231] Setting addon storage-provisioner=true in "auto-010870"
	I1108 00:35:34.642420   57398 host.go:66] Checking if "auto-010870" exists ...
	I1108 00:35:34.642870   57398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:35:34.642901   57398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:35:34.642905   57398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:35:34.642934   57398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:35:34.659472   57398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37277
	I1108 00:35:34.660052   57398 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:35:34.660677   57398 main.go:141] libmachine: Using API Version  1
	I1108 00:35:34.660696   57398 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:35:34.661217   57398 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:35:34.661280   57398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40385
	I1108 00:35:34.661566   57398 main.go:141] libmachine: (auto-010870) Calling .GetState
	I1108 00:35:34.661587   57398 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:35:34.662066   57398 main.go:141] libmachine: Using API Version  1
	I1108 00:35:34.662089   57398 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:35:34.662461   57398 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:35:34.663052   57398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:35:34.663096   57398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:35:34.665181   57398 addons.go:231] Setting addon default-storageclass=true in "auto-010870"
	I1108 00:35:34.665220   57398 host.go:66] Checking if "auto-010870" exists ...
	I1108 00:35:34.665622   57398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:35:34.665660   57398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:35:34.685265   57398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38925
	I1108 00:35:34.685279   57398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46551
	I1108 00:35:34.685719   57398 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:35:34.685766   57398 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:35:34.686259   57398 main.go:141] libmachine: Using API Version  1
	I1108 00:35:34.686279   57398 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:35:34.686373   57398 main.go:141] libmachine: Using API Version  1
	I1108 00:35:34.686395   57398 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:35:34.686642   57398 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:35:34.686680   57398 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:35:34.686803   57398 main.go:141] libmachine: (auto-010870) Calling .GetState
	I1108 00:35:34.687208   57398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:35:34.687235   57398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:35:34.688711   57398 main.go:141] libmachine: (auto-010870) Calling .DriverName
	I1108 00:35:34.690600   57398 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:35:34.692412   57398 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:35:34.692428   57398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 00:35:34.692443   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHHostname
	I1108 00:35:34.695711   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:34.696085   57398 main.go:141] libmachine: (auto-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:05:5f", ip: ""} in network mk-auto-010870: {Iface:virbr3 ExpiryTime:2023-11-08 01:34:48 +0000 UTC Type:0 Mac:52:54:00:89:05:5f Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-010870 Clientid:01:52:54:00:89:05:5f}
	I1108 00:35:34.696107   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined IP address 192.168.50.47 and MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:34.696394   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHPort
	I1108 00:35:34.696574   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHKeyPath
	I1108 00:35:34.696716   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHUsername
	I1108 00:35:34.696875   57398 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/auto-010870/id_rsa Username:docker}
	I1108 00:35:34.703131   57398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45407
	I1108 00:35:34.703503   57398 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:35:34.704010   57398 main.go:141] libmachine: Using API Version  1
	I1108 00:35:34.704026   57398 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:35:34.704446   57398 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:35:34.704609   57398 main.go:141] libmachine: (auto-010870) Calling .GetState
	I1108 00:35:34.706091   57398 main.go:141] libmachine: (auto-010870) Calling .DriverName
	I1108 00:35:34.706315   57398 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 00:35:34.706329   57398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 00:35:34.706344   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHHostname
	I1108 00:35:34.709081   57398 kapi.go:248] "coredns" deployment in "kube-system" namespace and "auto-010870" context rescaled to 1 replicas
	I1108 00:35:34.709110   57398 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 00:35:34.711357   57398 out.go:177] * Verifying Kubernetes components...
	I1108 00:35:34.709283   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:34.711221   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHPort
	I1108 00:35:34.713127   57398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:35:34.713179   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHKeyPath
	I1108 00:35:34.713204   57398 main.go:141] libmachine: (auto-010870) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:05:5f", ip: ""} in network mk-auto-010870: {Iface:virbr3 ExpiryTime:2023-11-08 01:34:48 +0000 UTC Type:0 Mac:52:54:00:89:05:5f Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-010870 Clientid:01:52:54:00:89:05:5f}
	I1108 00:35:34.713229   57398 main.go:141] libmachine: (auto-010870) DBG | domain auto-010870 has defined IP address 192.168.50.47 and MAC address 52:54:00:89:05:5f in network mk-auto-010870
	I1108 00:35:34.713552   57398 main.go:141] libmachine: (auto-010870) Calling .GetSSHUsername
	I1108 00:35:34.713754   57398 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/auto-010870/id_rsa Username:docker}
	I1108 00:35:34.920041   57398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
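The sed pipeline above edits the coredns ConfigMap in flight: it injects a hosts block that resolves host.minikube.internal to the host-side gateway (192.168.50.1 for this profile) and enables query logging, then pushes the result back with kubectl replace. The patched Corefile would contain a block along these lines (the surrounding directives are stock CoreDNS defaults, abridged and assumed here):

    .:53 {
        log
        errors
        hosts {
           192.168.50.1 host.minikube.internal
           fallthrough
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa { ... }
        forward . /etc/resolv.conf
        cache 30
    }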
	I1108 00:35:34.921574   57398 node_ready.go:35] waiting up to 15m0s for node "auto-010870" to be "Ready" ...
	I1108 00:35:34.927510   57398 node_ready.go:49] node "auto-010870" has status "Ready":"True"
	I1108 00:35:34.927536   57398 node_ready.go:38] duration metric: took 5.932422ms waiting for node "auto-010870" to be "Ready" ...
	I1108 00:35:34.927548   57398 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:35:34.936544   57398 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-pj8hz" in "kube-system" namespace to be "Ready" ...
	I1108 00:35:34.944312   57398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:35:34.963448   57398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-11-08 00:13:11 UTC, ends at Wed 2023-11-08 00:35:38 UTC. --
	Nov 08 00:35:38 embed-certs-253253 crio[727]: time="2023-11-08 00:35:38.282369617Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a448430e616fc8bce8ccd852cfd4f69e5b6cf66566029824b39b1b7ec72f5d0,PodSandboxId:e704b69630a14bc150790444bb9f5922934520bd59b741034b3af030dd3154bc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699402723944311961,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa05e7e5-87e7-43ac-af74-1c8a713b51c5,},Annotations:map[string]string{io.kubernetes.container.hash: f08330f1,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:610e49643c61470c996765677777c742caa805c0ba22eeec80e58174b6944205,PodSandboxId:d39a130850a3305fe58ff1962843f8f4abf944490777b24aa7bd64ee8f734a46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699402723536620550,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-thtp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3671b72-d562-4be2-9942-e971ee31b2c3,},Annotations:map[string]string{io.kubernetes.container.hash: 4e6a9c27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:145fd37d0c2140dc51a3911cb49bc3c8a6f67577994c48358fa4a03d43a60fa9,PodSandboxId:5d7fbc7f78bd27da40d11ae605c7c5545720800493c6651c0f3a24d40665dd5a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699402721164054403,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shp9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: cda240f2-977b-4318-9ee4-74f0090af489,},Annotations:map[string]string{io.kubernetes.container.hash: d10e2de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:839d5e12d3d5b0d8a803affe356b49fd782c553f882b0a29ac546df2e09ebee2,PodSandboxId:27d4c7691e43457e1dae6953ce7530ad9a019bf5ff5121dc0a25dfac10c95fc8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699402699250172747,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-253253,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: dece3072a963622363344a68ed68f60a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6041bef5c201bdca4a81bfb77a4d5f2c2d045393f3f7f8194d55cb1b7a3c806,PodSandboxId:fe825b3fb7a8fab84c5cfcf27725b9039ef08f7add8c41908d01f9050c44bc5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699402699278926824,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-253253,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: fd3dad67cbb105bf1c12cfa4d77a5516,},Annotations:map[string]string{io.kubernetes.container.hash: 460b5609,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f07ee7e14c7c0e3cbf1a7524433aa1920f3779093dc1b7c8ea38deb6087613,PodSandboxId:f4f214e73ec2525d4a6ed2b0a4f16328c717f1110c3d5ce773f0c67603c24bd4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699402698892453284,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-253253,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 12f202cfa4431635b8e608b4139d09ff,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a932303ed94d4eb247039a36dac42ce63a4506fe9af9bff104234376c9ec2ea5,PodSandboxId:4c8572e0a6c42cf4a4f04757b8b3c240f6fddedc7403ae4b04dcb5ca209adc7b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699402698987017299,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-253253,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be66e0dfe0c5d13f7ee475b7a4c8e76b
,},Annotations:map[string]string{io.kubernetes.container.hash: 4db098a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=aaee7ae0-8bdf-4d54-b000-88827200ee85 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:35:38 embed-certs-253253 crio[727]: time="2023-11-08 00:35:38.320128859Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=0dd1224c-41d4-4d00-8f73-7226a878bb8e name=/runtime.v1.RuntimeService/Version
	Nov 08 00:35:38 embed-certs-253253 crio[727]: time="2023-11-08 00:35:38.320184826Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=0dd1224c-41d4-4d00-8f73-7226a878bb8e name=/runtime.v1.RuntimeService/Version
	Nov 08 00:35:38 embed-certs-253253 crio[727]: time="2023-11-08 00:35:38.321386559Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=829b90f8-5daa-41e4-9e0c-b104c76d070c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:35:38 embed-certs-253253 crio[727]: time="2023-11-08 00:35:38.321870965Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699403738321856018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=829b90f8-5daa-41e4-9e0c-b104c76d070c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:35:38 embed-certs-253253 crio[727]: time="2023-11-08 00:35:38.322534456Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a59196bd-a49c-45b1-9c97-251c6787ec18 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:35:38 embed-certs-253253 crio[727]: time="2023-11-08 00:35:38.322603484Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a59196bd-a49c-45b1-9c97-251c6787ec18 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:35:38 embed-certs-253253 crio[727]: time="2023-11-08 00:35:38.322904407Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a448430e616fc8bce8ccd852cfd4f69e5b6cf66566029824b39b1b7ec72f5d0,PodSandboxId:e704b69630a14bc150790444bb9f5922934520bd59b741034b3af030dd3154bc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699402723944311961,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa05e7e5-87e7-43ac-af74-1c8a713b51c5,},Annotations:map[string]string{io.kubernetes.container.hash: f08330f1,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:610e49643c61470c996765677777c742caa805c0ba22eeec80e58174b6944205,PodSandboxId:d39a130850a3305fe58ff1962843f8f4abf944490777b24aa7bd64ee8f734a46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699402723536620550,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-thtp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3671b72-d562-4be2-9942-e971ee31b2c3,},Annotations:map[string]string{io.kubernetes.container.hash: 4e6a9c27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:145fd37d0c2140dc51a3911cb49bc3c8a6f67577994c48358fa4a03d43a60fa9,PodSandboxId:5d7fbc7f78bd27da40d11ae605c7c5545720800493c6651c0f3a24d40665dd5a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699402721164054403,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shp9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: cda240f2-977b-4318-9ee4-74f0090af489,},Annotations:map[string]string{io.kubernetes.container.hash: d10e2de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:839d5e12d3d5b0d8a803affe356b49fd782c553f882b0a29ac546df2e09ebee2,PodSandboxId:27d4c7691e43457e1dae6953ce7530ad9a019bf5ff5121dc0a25dfac10c95fc8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699402699250172747,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-253253,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: dece3072a963622363344a68ed68f60a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6041bef5c201bdca4a81bfb77a4d5f2c2d045393f3f7f8194d55cb1b7a3c806,PodSandboxId:fe825b3fb7a8fab84c5cfcf27725b9039ef08f7add8c41908d01f9050c44bc5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699402699278926824,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-253253,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: fd3dad67cbb105bf1c12cfa4d77a5516,},Annotations:map[string]string{io.kubernetes.container.hash: 460b5609,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f07ee7e14c7c0e3cbf1a7524433aa1920f3779093dc1b7c8ea38deb6087613,PodSandboxId:f4f214e73ec2525d4a6ed2b0a4f16328c717f1110c3d5ce773f0c67603c24bd4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699402698892453284,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-253253,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 12f202cfa4431635b8e608b4139d09ff,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a932303ed94d4eb247039a36dac42ce63a4506fe9af9bff104234376c9ec2ea5,PodSandboxId:4c8572e0a6c42cf4a4f04757b8b3c240f6fddedc7403ae4b04dcb5ca209adc7b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699402698987017299,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-253253,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be66e0dfe0c5d13f7ee475b7a4c8e76b
,},Annotations:map[string]string{io.kubernetes.container.hash: 4db098a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a59196bd-a49c-45b1-9c97-251c6787ec18 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:35:38 embed-certs-253253 crio[727]: time="2023-11-08 00:35:38.338080294Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=46809909-ad2f-4337-9a25-94ef3d94017a name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:35:38 embed-certs-253253 crio[727]: time="2023-11-08 00:35:38.338166126Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=46809909-ad2f-4337-9a25-94ef3d94017a name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:35:38 embed-certs-253253 crio[727]: time="2023-11-08 00:35:38.338315661Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a448430e616fc8bce8ccd852cfd4f69e5b6cf66566029824b39b1b7ec72f5d0,PodSandboxId:e704b69630a14bc150790444bb9f5922934520bd59b741034b3af030dd3154bc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699402723944311961,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa05e7e5-87e7-43ac-af74-1c8a713b51c5,},Annotations:map[string]string{io.kubernetes.container.hash: f08330f1,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:610e49643c61470c996765677777c742caa805c0ba22eeec80e58174b6944205,PodSandboxId:d39a130850a3305fe58ff1962843f8f4abf944490777b24aa7bd64ee8f734a46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699402723536620550,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-thtp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3671b72-d562-4be2-9942-e971ee31b2c3,},Annotations:map[string]string{io.kubernetes.container.hash: 4e6a9c27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:145fd37d0c2140dc51a3911cb49bc3c8a6f67577994c48358fa4a03d43a60fa9,PodSandboxId:5d7fbc7f78bd27da40d11ae605c7c5545720800493c6651c0f3a24d40665dd5a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699402721164054403,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shp9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: cda240f2-977b-4318-9ee4-74f0090af489,},Annotations:map[string]string{io.kubernetes.container.hash: d10e2de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:839d5e12d3d5b0d8a803affe356b49fd782c553f882b0a29ac546df2e09ebee2,PodSandboxId:27d4c7691e43457e1dae6953ce7530ad9a019bf5ff5121dc0a25dfac10c95fc8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699402699250172747,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-253253,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: dece3072a963622363344a68ed68f60a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6041bef5c201bdca4a81bfb77a4d5f2c2d045393f3f7f8194d55cb1b7a3c806,PodSandboxId:fe825b3fb7a8fab84c5cfcf27725b9039ef08f7add8c41908d01f9050c44bc5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699402699278926824,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-253253,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: fd3dad67cbb105bf1c12cfa4d77a5516,},Annotations:map[string]string{io.kubernetes.container.hash: 460b5609,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f07ee7e14c7c0e3cbf1a7524433aa1920f3779093dc1b7c8ea38deb6087613,PodSandboxId:f4f214e73ec2525d4a6ed2b0a4f16328c717f1110c3d5ce773f0c67603c24bd4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699402698892453284,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-253253,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 12f202cfa4431635b8e608b4139d09ff,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a932303ed94d4eb247039a36dac42ce63a4506fe9af9bff104234376c9ec2ea5,PodSandboxId:4c8572e0a6c42cf4a4f04757b8b3c240f6fddedc7403ae4b04dcb5ca209adc7b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699402698987017299,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-253253,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be66e0dfe0c5d13f7ee475b7a4c8e76b
,},Annotations:map[string]string{io.kubernetes.container.hash: 4db098a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=46809909-ad2f-4337-9a25-94ef3d94017a name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:35:38 embed-certs-253253 crio[727]: time="2023-11-08 00:35:38.339246871Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:5a448430e616fc8bce8ccd852cfd4f69e5b6cf66566029824b39b1b7ec72f5d0,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=61ac623b-bb9d-456b-9826-855c7eb919d7 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 08 00:35:38 embed-certs-253253 crio[727]: time="2023-11-08 00:35:38.339379351Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:5a448430e616fc8bce8ccd852cfd4f69e5b6cf66566029824b39b1b7ec72f5d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1699402724082247926,StartedAt:1699402724124408525,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa05e7e5-87e7-43ac-af74-1c8a713b51c5,},Annotations:map[string]string{io.kubernetes.container.hash: f08330f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/fa05e7e5-87e7-43ac-af74-1c8a713b51c5/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/fa05e7e5-87e7-43ac-af74-1c8a713b51c5/containers/storage-provisioner/2f5eb90b,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/fa05e7e5-87e7-43ac-af74-1c8a713b51c5/volumes/kubernetes.io~projected/kube-api-access-5xr7p,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_storage-provisioner_fa05e7e5-87e7-43ac-af74-1c8a713b51c5/storage-provisioner/0
.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=61ac623b-bb9d-456b-9826-855c7eb919d7 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 08 00:35:38 embed-certs-253253 crio[727]: time="2023-11-08 00:35:38.339953418Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:610e49643c61470c996765677777c742caa805c0ba22eeec80e58174b6944205,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=22103065-0eb6-4a94-be64-86b3d573e4a9 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 08 00:35:38 embed-certs-253253 crio[727]: time="2023-11-08 00:35:38.340093406Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:610e49643c61470c996765677777c742caa805c0ba22eeec80e58174b6944205,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1699402723606730143,StartedAt:1699402723660746362,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.10.1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-thtp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3671b72-d562-4be2-9942-e971ee31b2c3,},Annotations:map[string]string{io.kubernetes.container.hash: 4e6a9c27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/a3671b72-d562-4be2-9942-e971ee31b2c3/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/a3671b72-d562-4be2-9942-e971ee31b2c3/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/a3671b72-d562-4be2-9942-e971ee31b2c3/containers/coredns/c4fe422b,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/
lib/kubelet/pods/a3671b72-d562-4be2-9942-e971ee31b2c3/volumes/kubernetes.io~projected/kube-api-access-fnvgv,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_coredns-5dd5756b68-thtp4_a3671b72-d562-4be2-9942-e971ee31b2c3/coredns/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=22103065-0eb6-4a94-be64-86b3d573e4a9 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 08 00:35:38 embed-certs-253253 crio[727]: time="2023-11-08 00:35:38.341153496Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:145fd37d0c2140dc51a3911cb49bc3c8a6f67577994c48358fa4a03d43a60fa9,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=e86d67b6-c35b-47e8-b646-e441a2e43e10 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 08 00:35:38 embed-certs-253253 crio[727]: time="2023-11-08 00:35:38.341309977Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:145fd37d0c2140dc51a3911cb49bc3c8a6f67577994c48358fa4a03d43a60fa9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1699402721571822368,StartedAt:1699402721755003069,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.28.3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shp9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cda240f2-977b-4318-9ee4-74f0090af489,},Annotations:map[string]string{io.kubernetes.container.hash: d10e2de,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/cda240f2-977b-4318-9ee4-74f0090af489/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/cda240f2-977b-4318-9ee4-74f0090af489/containers/kube-proxy/b1f953ab,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/lib/kubelet/pods/cda240f2-977b-4318-9ee4-74f0090af489/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/
serviceaccount,HostPath:/var/lib/kubelet/pods/cda240f2-977b-4318-9ee4-74f0090af489/volumes/kubernetes.io~projected/kube-api-access-q7vbw,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-proxy-shp9z_cda240f2-977b-4318-9ee4-74f0090af489/kube-proxy/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=e86d67b6-c35b-47e8-b646-e441a2e43e10 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 08 00:35:38 embed-certs-253253 crio[727]: time="2023-11-08 00:35:38.341649288Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:839d5e12d3d5b0d8a803affe356b49fd782c553f882b0a29ac546df2e09ebee2,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=c80140b6-2f1b-4d7f-a85d-e04436824596 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 08 00:35:38 embed-certs-253253 crio[727]: time="2023-11-08 00:35:38.341937714Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:839d5e12d3d5b0d8a803affe356b49fd782c553f882b0a29ac546df2e09ebee2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1699402699412827027,StartedAt:1699402700310865439,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.28.3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-253253,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dece3072a963622363344a68ed68f60a,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/dece3072a963622363344a68ed68f60a/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/dece3072a963622363344a68ed68f60a/containers/kube-controller-manager/3b87ac49,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRI
VATE,},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-controller-manager-embed-certs-253253_dece3072a963622363344a68ed68f60a/kube-controller-manager/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=c80140b6-2f1b-4d7f-a85d-e04436824596 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 08 00:35:38 embed-certs-253253 crio[727]: time="2023-11-08 00:35:38.342254112Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:d6041bef5c201bdca4a81bfb77a4d5f2c2d045393f3f7f8194d55cb1b7a3c806,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=2f6bef9d-fbe3-4d10-af28-c26d1212afaf name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 08 00:35:38 embed-certs-253253 crio[727]: time="2023-11-08 00:35:38.342376165Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:d6041bef5c201bdca4a81bfb77a4d5f2c2d045393f3f7f8194d55cb1b7a3c806,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1699402699384844878,StartedAt:1699402700184102119,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.28.3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-253253,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd3dad67cbb105bf1c12cfa4d77a5516,},Annotations:map[string]string{io.kubernetes.container.hash: 460b5609,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/fd3dad67cbb105bf1c12cfa4d77a5516/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/fd3dad67cbb105bf1c12cfa4d77a5516/containers/kube-apiserver/c41bb565,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-apiserver-embed-certs-253253_fd3dad67c
bb105bf1c12cfa4d77a5516/kube-apiserver/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=2f6bef9d-fbe3-4d10-af28-c26d1212afaf name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 08 00:35:38 embed-certs-253253 crio[727]: time="2023-11-08 00:35:38.342651185Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:d2f07ee7e14c7c0e3cbf1a7524433aa1920f3779093dc1b7c8ea38deb6087613,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=83bc8814-e701-4965-b085-80b8e23c1177 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 08 00:35:38 embed-certs-253253 crio[727]: time="2023-11-08 00:35:38.342819356Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:d2f07ee7e14c7c0e3cbf1a7524433aa1920f3779093dc1b7c8ea38deb6087613,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1699402699244159374,StartedAt:1699402700899165646,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.28.3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-253253,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12f202cfa4431635b8e608b4139d09ff,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/12f202cfa4431635b8e608b4139d09ff/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/12f202cfa4431635b8e608b4139d09ff/containers/kube-scheduler/5325cc60,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-scheduler-embed-certs-253253_12f202cfa4431635b8e608b4139d09ff/kube-scheduler/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=83bc8814-e701-4965-b085-80b8e23c1177 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 08 00:35:38 embed-certs-253253 crio[727]: time="2023-11-08 00:35:38.343081664Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:a932303ed94d4eb247039a36dac42ce63a4506fe9af9bff104234376c9ec2ea5,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=d865dba4-f9fe-486b-9977-ee816715f77c name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 08 00:35:38 embed-certs-253253 crio[727]: time="2023-11-08 00:35:38.343187462Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:a932303ed94d4eb247039a36dac42ce63a4506fe9af9bff104234376c9ec2ea5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1699402699212674349,StartedAt:1699402700937921907,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.9-0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-253253,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be66e0dfe0c5d13f7ee475b7a4c8e76b,},Annotations:map[string]string{io.kubernetes.container.hash: 4db098a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/be66e0dfe0c5d13f7ee475b7a4c8e76b/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/be66e0dfe0c5d13f7ee475b7a4c8e76b/containers/etcd/cbbbd3c5,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_etcd-embed-certs-253253_be66e0dfe0c5d13f7ee475b7a4c8e76b/etcd/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=d865dba4-f9fe-486b-9977-ee816715f77c name=/runtime.v1.RuntimeService/ContainerStatus
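	The ListContainers / ContainerStatus traffic above is the kubelet's periodic poll of CRI-O over the CRI socket. A rough way to issue the same two RPCs by hand from the node (a sketch, assuming crictl is installed in the minikube VM and CRI-O listens on the socket named in the cri-socket annotation further down):
	
	  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a        # ListContainers
	  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock inspect 5a448430e616f   # ContainerStatus
	
	crictl accepts the same short container-ID prefixes shown in the container status table below.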
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5a448430e616f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   e704b69630a14       storage-provisioner
	610e49643c614       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   16 minutes ago      Running             coredns                   0                   d39a130850a33       coredns-5dd5756b68-thtp4
	145fd37d0c214       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   16 minutes ago      Running             kube-proxy                0                   5d7fbc7f78bd2       kube-proxy-shp9z
	d6041bef5c201       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   17 minutes ago      Running             kube-apiserver            2                   fe825b3fb7a8f       kube-apiserver-embed-certs-253253
	839d5e12d3d5b       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   17 minutes ago      Running             kube-controller-manager   2                   27d4c7691e434       kube-controller-manager-embed-certs-253253
	a932303ed94d4       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   17 minutes ago      Running             etcd                      2                   4c8572e0a6c42       etcd-embed-certs-253253
	d2f07ee7e14c7       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   17 minutes ago      Running             kube-scheduler            2                   f4f214e73ec25       kube-scheduler-embed-certs-253253
	
	* 
	* ==> coredns [610e49643c61470c996765677777c742caa805c0ba22eeec80e58174b6944205] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48227 - 57258 "HINFO IN 5919024392424834459.2329518990281447896. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015428028s
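	The single HINFO query for a long random name is the CoreDNS loop plugin's startup self-check; NXDOMAIN is the healthy answer (a forwarding loop would echo the probe back to CoreDNS). To dump the Corefile behind the configuration SHA printed above (a sketch, assuming the kubeadm-default ConfigMap name):
	
	  $ kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'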
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-253253
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-253253
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e
	                    minikube.k8s.io/name=embed-certs-253253
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_08T00_18_27_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Nov 2023 00:18:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-253253
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Nov 2023 00:35:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Nov 2023 00:34:07 +0000   Wed, 08 Nov 2023 00:18:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Nov 2023 00:34:07 +0000   Wed, 08 Nov 2023 00:18:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Nov 2023 00:34:07 +0000   Wed, 08 Nov 2023 00:18:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Nov 2023 00:34:07 +0000   Wed, 08 Nov 2023 00:18:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.159
	  Hostname:    embed-certs-253253
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a10ee08b8f4f4452abe24ecfc389bc9c
	  System UUID:                a10ee08b-8f4f-4452-abe2-4ecfc389bc9c
	  Boot ID:                    9f9d89ce-b341-40b8-9f1b-fd7bd7add76a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-thtp4                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-embed-certs-253253                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kube-apiserver-embed-certs-253253             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-embed-certs-253253    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-shp9z                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-253253             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-57f55c9bc5-f8rk4               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 17m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m   kubelet          Node embed-certs-253253 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m   kubelet          Node embed-certs-253253 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m   kubelet          Node embed-certs-253253 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             17m   kubelet          Node embed-certs-253253 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  17m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                17m   kubelet          Node embed-certs-253253 status is now: NodeReady
	  Normal  RegisteredNode           16m   node-controller  Node embed-certs-253253 event: Registered Node embed-certs-253253 in Controller
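	This block corresponds to `kubectl describe node embed-certs-253253`. Of the eight non-terminated pods, only metrics-server-57f55c9bc5-f8rk4 appears never to become serviceable (the apiserver log below is still receiving 503s from its APIService at 00:35). A quick cross-check, assuming the minikube profile name doubles as the kubeconfig context:
	
	  $ kubectl --context embed-certs-253253 -n kube-system get pod metrics-server-57f55c9bc5-f8rk4 -o wide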
	
	* 
	* ==> dmesg <==
	* [Nov 8 00:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067340] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.413433] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.604799] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.142679] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.461087] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.051861] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.112114] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.153549] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.134252] systemd-fstab-generator[687]: Ignoring "noauto" for root device
	[  +0.235283] systemd-fstab-generator[711]: Ignoring "noauto" for root device
	[ +17.184635] systemd-fstab-generator[926]: Ignoring "noauto" for root device
	[ +20.677236] kauditd_printk_skb: 34 callbacks suppressed
	[Nov 8 00:18] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.550389] systemd-fstab-generator[3734]: Ignoring "noauto" for root device
	[  +9.808129] systemd-fstab-generator[4059]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [a932303ed94d4eb247039a36dac42ce63a4506fe9af9bff104234376c9ec2ea5] <==
	* {"level":"info","ts":"2023-11-08T00:18:21.50027Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T00:18:21.500819Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T00:18:21.506069Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-08T00:18:21.506854Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-08T00:18:21.506972Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-08T00:18:21.507109Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.159:2379"}
	{"level":"info","ts":"2023-11-08T00:18:21.507272Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T00:18:21.532573Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bc02953927cca850","local-member-id":"f0ef8018a32f46af","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T00:18:21.532871Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T00:18:21.532954Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T00:28:21.856873Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":678}
	{"level":"info","ts":"2023-11-08T00:28:21.859569Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":678,"took":"2.266298ms","hash":1989073171}
	{"level":"info","ts":"2023-11-08T00:28:21.859637Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1989073171,"revision":678,"compact-revision":-1}
	{"level":"info","ts":"2023-11-08T00:32:55.461868Z","caller":"traceutil/trace.go:171","msg":"trace[1769560584] linearizableReadLoop","detail":"{readStateIndex:1333; appliedIndex:1332; }","duration":"278.668851ms","start":"2023-11-08T00:32:55.183153Z","end":"2023-11-08T00:32:55.461822Z","steps":["trace[1769560584] 'read index received'  (duration: 278.336584ms)","trace[1769560584] 'applied index is now lower than readState.Index'  (duration: 330.761µs)"],"step_count":2}
	{"level":"info","ts":"2023-11-08T00:32:55.462287Z","caller":"traceutil/trace.go:171","msg":"trace[2049875814] transaction","detail":"{read_only:false; response_revision:1144; number_of_response:1; }","duration":"282.383364ms","start":"2023-11-08T00:32:55.17988Z","end":"2023-11-08T00:32:55.462264Z","steps":["trace[2049875814] 'process raft request'  (duration: 281.67892ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-08T00:32:55.462502Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"279.271809ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2023-11-08T00:32:55.462634Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.952696ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.159\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2023-11-08T00:32:55.462787Z","caller":"traceutil/trace.go:171","msg":"trace[609574164] range","detail":"{range_begin:/registry/masterleases/192.168.39.159; range_end:; response_count:1; response_revision:1144; }","duration":"136.11522ms","start":"2023-11-08T00:32:55.326662Z","end":"2023-11-08T00:32:55.462778Z","steps":["trace[609574164] 'agreement among raft nodes before linearized reading'  (duration: 135.916224ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-08T00:32:55.462666Z","caller":"traceutil/trace.go:171","msg":"trace[1941903409] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1144; }","duration":"279.507431ms","start":"2023-11-08T00:32:55.183134Z","end":"2023-11-08T00:32:55.462641Z","steps":["trace[1941903409] 'agreement among raft nodes before linearized reading'  (duration: 279.187317ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-08T00:32:55.708592Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.068023ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5093443275694095196 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:46af8bac4b593b5b>","response":"size:41"}
	{"level":"info","ts":"2023-11-08T00:33:21.864247Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":922}
	{"level":"info","ts":"2023-11-08T00:33:21.866199Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":922,"took":"1.52831ms","hash":2549455889}
	{"level":"info","ts":"2023-11-08T00:33:21.866276Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2549455889,"revision":922,"compact-revision":678}
	{"level":"warn","ts":"2023-11-08T00:34:05.536192Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.53959ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5093443275694095553 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.159\" mod_revision:1194 > success:<request_put:<key:\"/registry/masterleases/192.168.39.159\" value_size:67 lease:5093443275694095551 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.159\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-11-08T00:34:05.536489Z","caller":"traceutil/trace.go:171","msg":"trace[538002959] transaction","detail":"{read_only:false; response_revision:1202; number_of_response:1; }","duration":"186.96138ms","start":"2023-11-08T00:34:05.349495Z","end":"2023-11-08T00:34:05.536456Z","steps":["trace[538002959] 'process raft request'  (duration: 62.279738ms)","trace[538002959] 'compare'  (duration: 123.389497ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  00:35:39 up 22 min,  0 users,  load average: 0.40, 0.21, 0.21
	Linux embed-certs-253253 5.10.57 #1 SMP Tue Nov 7 06:51:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [d6041bef5c201bdca4a81bfb77a4d5f2c2d045393f3f7f8194d55cb1b7a3c806] <==
	* E1108 00:33:24.565772       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1108 00:33:24.566669       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1108 00:33:33.732940       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","system","node-high","leader-election","workload-high","workload-low","global-default"] items=[{},{},{},{},{},{},{},{}]
	E1108 00:33:43.733984       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","system","node-high","leader-election","workload-high","workload-low","global-default","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E1108 00:33:53.734391       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","global-default","catch-all","exempt","system","node-high","leader-election"] items=[{},{},{},{},{},{},{},{}]
	E1108 00:34:03.735109       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","global-default","catch-all","exempt","system","node-high","leader-election","workload-high"] items=[{},{},{},{},{},{},{},{}]
	E1108 00:34:13.736061       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt","system"] items=[{},{},{},{},{},{},{},{}]
	I1108 00:34:23.445652       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1108 00:34:23.736894       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","system","node-high","leader-election","workload-high","workload-low","global-default","catch-all"] items=[{},{},{},{},{},{},{},{}]
	W1108 00:34:24.566054       1 handler_proxy.go:93] no RequestInfo found in the context
	E1108 00:34:24.566273       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1108 00:34:24.566327       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1108 00:34:24.567282       1 handler_proxy.go:93] no RequestInfo found in the context
	E1108 00:34:24.567387       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1108 00:34:24.567401       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1108 00:34:33.737442       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	E1108 00:34:43.738288       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","workload-high","workload-low","global-default","catch-all","exempt","system","node-high"] items=[{},{},{},{},{},{},{},{}]
	E1108 00:34:53.739199       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","system","node-high","leader-election","workload-high","workload-low","global-default","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E1108 00:35:03.740783       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","global-default","catch-all","exempt","system","node-high","leader-election","workload-high"] items=[{},{},{},{},{},{},{},{}]
	E1108 00:35:13.742177       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","system","node-high","leader-election","workload-high","workload-low","global-default"] items=[{},{},{},{},{},{},{},{}]
	I1108 00:35:23.444601       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1108 00:35:23.743147       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	E1108 00:35:33.743551       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","catch-all","exempt","system","node-high","leader-election","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
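	Two error streams recur here. The 503 for v1beta1.metrics.k8s.io means the aggregated APIService registered by metrics-server has no reachable backend, so the OpenAPI aggregator keeps requeueing it. The apf_controller "ran out of bounds" messages come from the API Priority and Fairness concurrency-limit solver; this log alone does not show whether the two share a cause. To inspect the aggregation side (assuming the addon's conventional k8s-app label):
	
	  $ kubectl get apiservice v1beta1.metrics.k8s.io
	  $ kubectl -n kube-system get pods -l k8s-app=metrics-server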
	
	* 
	* ==> kube-controller-manager [839d5e12d3d5b0d8a803affe356b49fd782c553f882b0a29ac546df2e09ebee2] <==
	* I1108 00:30:03.443036       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="108.362µs"
	E1108 00:30:09.972982       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:30:10.479611       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:30:39.979576       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:30:40.488953       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:31:09.986987       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:31:10.497366       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:31:39.993552       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:31:40.505616       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:32:09.999631       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:32:10.514035       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:32:40.006544       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:32:40.522650       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:33:10.014839       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:33:10.537249       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:33:40.021807       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:33:40.545926       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:34:10.029890       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:34:10.554490       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:34:40.036061       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:34:40.566279       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1108 00:34:51.449203       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="326.095µs"
	I1108 00:35:03.454282       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="135.76µs"
	E1108 00:35:10.046809       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:35:10.575334       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
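The controller-manager's repeating pair above (resource-quota discovery failure plus garbage-collector group-discovery failure, every thirty seconds) is consistent with metrics-server never serving its API: the aggregated metrics.k8s.io/v1beta1 APIService stays stale while the backing pod sits in ImagePullBackOff. A hypothetical spot check, assuming a reachable context, would be:

	kubectl --context embed-certs-253253 get apiservice v1beta1.metrics.k8s.io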
	
	* 
	* ==> kube-proxy [145fd37d0c2140dc51a3911cb49bc3c8a6f67577994c48358fa4a03d43a60fa9] <==
	* I1108 00:18:42.179931       1 server_others.go:69] "Using iptables proxy"
	I1108 00:18:42.219908       1 node.go:141] Successfully retrieved node IP: 192.168.39.159
	I1108 00:18:42.392604       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1108 00:18:42.392649       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1108 00:18:42.416538       1 server_others.go:152] "Using iptables Proxier"
	I1108 00:18:42.416661       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1108 00:18:42.416914       1 server.go:846] "Version info" version="v1.28.3"
	I1108 00:18:42.416924       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 00:18:42.426618       1 config.go:188] "Starting service config controller"
	I1108 00:18:42.427411       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1108 00:18:42.427444       1 config.go:97] "Starting endpoint slice config controller"
	I1108 00:18:42.427450       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1108 00:18:42.438847       1 config.go:315] "Starting node config controller"
	I1108 00:18:42.439832       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1108 00:18:42.530664       1 shared_informer.go:318] Caches are synced for service config
	I1108 00:18:42.531037       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1108 00:18:42.546914       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [d2f07ee7e14c7c0e3cbf1a7524433aa1920f3779093dc1b7c8ea38deb6087613] <==
	* W1108 00:18:23.672580       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1108 00:18:23.674815       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1108 00:18:23.672886       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1108 00:18:23.672900       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1108 00:18:23.675141       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1108 00:18:23.675164       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1108 00:18:24.525289       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1108 00:18:24.525430       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1108 00:18:24.530357       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1108 00:18:24.530425       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1108 00:18:24.650381       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1108 00:18:24.650448       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1108 00:18:24.673433       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1108 00:18:24.673483       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1108 00:18:24.773445       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1108 00:18:24.773531       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1108 00:18:24.796652       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1108 00:18:24.796819       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 00:18:24.856914       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1108 00:18:24.857037       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1108 00:18:24.900015       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1108 00:18:24.900127       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1108 00:18:24.948304       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1108 00:18:24.948413       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1108 00:18:27.952552       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
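The forbidden list/watch errors above are the scheduler's informers racing the API server's RBAC bootstrap at startup; they stop once the caches sync (final line, 00:18:27). If one wanted to confirm the scheduler's permissions after startup, a hypothetical check would be:

	kubectl --context embed-certs-253253 auth can-i list services --as=system:kube-scheduler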
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-11-08 00:13:11 UTC, ends at Wed 2023-11-08 00:35:39 UTC. --
	Nov 08 00:33:27 embed-certs-253253 kubelet[4066]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 08 00:33:27 embed-certs-253253 kubelet[4066]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 08 00:33:27 embed-certs-253253 kubelet[4066]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 08 00:33:27 embed-certs-253253 kubelet[4066]: E1108 00:33:27.549322    4066 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Nov 08 00:33:36 embed-certs-253253 kubelet[4066]: E1108 00:33:36.424276    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-f8rk4" podUID="927cc877-7a22-47e3-b666-1adf0cc1b5c6"
	Nov 08 00:33:51 embed-certs-253253 kubelet[4066]: E1108 00:33:51.424486    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-f8rk4" podUID="927cc877-7a22-47e3-b666-1adf0cc1b5c6"
	Nov 08 00:34:02 embed-certs-253253 kubelet[4066]: E1108 00:34:02.424608    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-f8rk4" podUID="927cc877-7a22-47e3-b666-1adf0cc1b5c6"
	Nov 08 00:34:14 embed-certs-253253 kubelet[4066]: E1108 00:34:14.425061    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-f8rk4" podUID="927cc877-7a22-47e3-b666-1adf0cc1b5c6"
	Nov 08 00:34:27 embed-certs-253253 kubelet[4066]: E1108 00:34:27.428444    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-f8rk4" podUID="927cc877-7a22-47e3-b666-1adf0cc1b5c6"
	Nov 08 00:34:27 embed-certs-253253 kubelet[4066]: E1108 00:34:27.519380    4066 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 08 00:34:27 embed-certs-253253 kubelet[4066]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 08 00:34:27 embed-certs-253253 kubelet[4066]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 08 00:34:27 embed-certs-253253 kubelet[4066]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 08 00:34:38 embed-certs-253253 kubelet[4066]: E1108 00:34:38.434983    4066 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Nov 08 00:34:38 embed-certs-253253 kubelet[4066]: E1108 00:34:38.435030    4066 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Nov 08 00:34:38 embed-certs-253253 kubelet[4066]: E1108 00:34:38.435212    4066 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-gg9cm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-f8rk4_kube-system(927cc877-7a22-47e3-b666-1adf0cc1b5c6): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 08 00:34:38 embed-certs-253253 kubelet[4066]: E1108 00:34:38.435248    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-f8rk4" podUID="927cc877-7a22-47e3-b666-1adf0cc1b5c6"
	Nov 08 00:34:51 embed-certs-253253 kubelet[4066]: E1108 00:34:51.426971    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-f8rk4" podUID="927cc877-7a22-47e3-b666-1adf0cc1b5c6"
	Nov 08 00:35:03 embed-certs-253253 kubelet[4066]: E1108 00:35:03.428432    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-f8rk4" podUID="927cc877-7a22-47e3-b666-1adf0cc1b5c6"
	Nov 08 00:35:16 embed-certs-253253 kubelet[4066]: E1108 00:35:16.425887    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-f8rk4" podUID="927cc877-7a22-47e3-b666-1adf0cc1b5c6"
	Nov 08 00:35:27 embed-certs-253253 kubelet[4066]: E1108 00:35:27.423839    4066 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-f8rk4" podUID="927cc877-7a22-47e3-b666-1adf0cc1b5c6"
	Nov 08 00:35:27 embed-certs-253253 kubelet[4066]: E1108 00:35:27.515022    4066 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 08 00:35:27 embed-certs-253253 kubelet[4066]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 08 00:35:27 embed-certs-253253 kubelet[4066]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 08 00:35:27 embed-certs-253253 kubelet[4066]:  > table="nat" chain="KUBE-KUBELET-CANARY"
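The kubelet block shows the expected failure mode for this test: metrics-server was enabled with its registry rewritten to fake.domain (see the Audit table below, --registries=MetricsServer=fake.domain), so every pull of fake.domain/registry.k8s.io/echoserver:1.4 fails DNS resolution and the pod oscillates between ErrImagePull and ImagePullBackOff. The ip6tables canary errors appear to be unrelated noise from the minikube guest kernel lacking the ip6tables nat table. A hypothetical way to surface the pull events directly:

	kubectl --context embed-certs-253253 -n kube-system get events --field-selector involvedObject.name=metrics-server-57f55c9bc5-f8rk4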
	
	* 
	* ==> storage-provisioner [5a448430e616fc8bce8ccd852cfd4f69e5b6cf66566029824b39b1b7ec72f5d0] <==
	* I1108 00:18:44.198187       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 00:18:44.214219       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 00:18:44.214361       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1108 00:18:44.224253       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 00:18:44.226326       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bdb638a5-4eef-4712-a557-6b799a37a79b", APIVersion:"v1", ResourceVersion:"416", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-253253_a66df68b-21c0-4eba-863c-c8c2003b7d9a became leader
	I1108 00:18:44.227012       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-253253_a66df68b-21c0-4eba-863c-c8c2003b7d9a!
	I1108 00:18:44.328221       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-253253_a66df68b-21c0-4eba-863c-c8c2003b7d9a!
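storage-provisioner came up cleanly: it acquired the kube-system/k8s.io-minikube-hostpath leader-election lease (recorded on an Endpoints object, per the event above) and started its controller. If the lease ever needed inspection, a hypothetical check would be:

	kubectl --context embed-certs-253253 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml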
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-253253 -n embed-certs-253253
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-253253 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-f8rk4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-253253 describe pod metrics-server-57f55c9bc5-f8rk4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-253253 describe pod metrics-server-57f55c9bc5-f8rk4: exit status 1 (78.530457ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-f8rk4" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-253253 describe pod metrics-server-57f55c9bc5-f8rk4: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (469.89s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (400.26s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-320390 -n no-preload-320390
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-11-08 00:34:28.591741532 +0000 UTC m=+5602.755050417
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-320390 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-320390 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.571µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-320390 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
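The image assertion at start_stop_delete_test.go:297 ran after the test's overall context deadline had already expired (note the 1.571µs non-zero exit above), so the deployment info is empty rather than demonstrably wrong. A hypothetical way to re-check the scraper image outside the deadline, using the same deployment name the test targets:

	kubectl --context no-preload-320390 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'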
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-320390 -n no-preload-320390
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-320390 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-320390 logs -n 25: (1.323949094s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-320390                                   | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-253253            | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:06 UTC | 08 Nov 23 00:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-253253                                  | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-688874                              | stopped-upgrade-688874       | jenkins | v1.32.0 | 08 Nov 23 00:06 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-688874                              | stopped-upgrade-688874       | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC | 08 Nov 23 00:07 UTC |
	| delete  | -p                                                     | disable-driver-mounts-560216 | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC | 08 Nov 23 00:07 UTC |
	|         | disable-driver-mounts-560216                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC | 08 Nov 23 00:09 UTC |
	|         | default-k8s-diff-port-039263                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-590541             | old-k8s-version-590541       | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-590541                              | old-k8s-version-590541       | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC | 08 Nov 23 00:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-320390                  | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-253253                 | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-320390                                   | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC | 08 Nov 23 00:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-253253                                  | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC | 08 Nov 23 00:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-039263  | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC | 08 Nov 23 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC |                     |
	|         | default-k8s-diff-port-039263                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-039263       | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:12 UTC | 08 Nov 23 00:19 UTC |
	|         | default-k8s-diff-port-039263                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-590541                              | old-k8s-version-590541       | jenkins | v1.32.0 | 08 Nov 23 00:32 UTC | 08 Nov 23 00:32 UTC |
	| start   | -p newest-cni-409933 --memory=2200 --alsologtostderr   | newest-cni-409933            | jenkins | v1.32.0 | 08 Nov 23 00:32 UTC | 08 Nov 23 00:33 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-409933             | newest-cni-409933            | jenkins | v1.32.0 | 08 Nov 23 00:33 UTC | 08 Nov 23 00:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-409933                                   | newest-cni-409933            | jenkins | v1.32.0 | 08 Nov 23 00:33 UTC | 08 Nov 23 00:33 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-409933                  | newest-cni-409933            | jenkins | v1.32.0 | 08 Nov 23 00:33 UTC | 08 Nov 23 00:33 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-409933 --memory=2200 --alsologtostderr   | newest-cni-409933            | jenkins | v1.32.0 | 08 Nov 23 00:33 UTC | 08 Nov 23 00:34 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-409933 sudo                              | newest-cni-409933            | jenkins | v1.32.0 | 08 Nov 23 00:34 UTC | 08 Nov 23 00:34 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-409933                                   | newest-cni-409933            | jenkins | v1.32.0 | 08 Nov 23 00:34 UTC |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/08 00:33:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 00:33:35.163988   56540 out.go:296] Setting OutFile to fd 1 ...
	I1108 00:33:35.164158   56540 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:33:35.164169   56540 out.go:309] Setting ErrFile to fd 2...
	I1108 00:33:35.164177   56540 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:33:35.164424   56540 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
	I1108 00:33:35.165055   56540 out.go:303] Setting JSON to false
	I1108 00:33:35.165957   56540 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8164,"bootTime":1699395451,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 00:33:35.166012   56540 start.go:138] virtualization: kvm guest
	I1108 00:33:35.168150   56540 out.go:177] * [newest-cni-409933] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1108 00:33:35.169845   56540 out.go:177]   - MINIKUBE_LOCATION=17585
	I1108 00:33:35.171152   56540 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 00:33:35.169854   56540 notify.go:220] Checking for updates...
	I1108 00:33:35.172543   56540 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:33:35.173781   56540 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	I1108 00:33:35.175103   56540 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 00:33:35.176504   56540 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 00:33:35.178249   56540 config.go:182] Loaded profile config "newest-cni-409933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:33:35.178722   56540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:33:35.178771   56540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:33:35.193067   56540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34201
	I1108 00:33:35.193488   56540 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:33:35.194035   56540 main.go:141] libmachine: Using API Version  1
	I1108 00:33:35.194059   56540 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:33:35.194353   56540 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:33:35.194557   56540 main.go:141] libmachine: (newest-cni-409933) Calling .DriverName
	I1108 00:33:35.194782   56540 driver.go:378] Setting default libvirt URI to qemu:///system
	I1108 00:33:35.195136   56540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:33:35.195188   56540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:33:35.209258   56540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42135
	I1108 00:33:35.209630   56540 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:33:35.210090   56540 main.go:141] libmachine: Using API Version  1
	I1108 00:33:35.210123   56540 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:33:35.210421   56540 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:33:35.210578   56540 main.go:141] libmachine: (newest-cni-409933) Calling .DriverName
	I1108 00:33:35.245104   56540 out.go:177] * Using the kvm2 driver based on existing profile
	I1108 00:33:35.246551   56540 start.go:298] selected driver: kvm2
	I1108 00:33:35.246572   56540 start.go:902] validating driver "kvm2" against &{Name:newest-cni-409933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-409933 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.8 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:33:35.246691   56540 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 00:33:35.247315   56540 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:33:35.247383   56540 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17585-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1108 00:33:35.262259   56540 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1108 00:33:35.262714   56540 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 00:33:35.262777   56540 cni.go:84] Creating CNI manager for ""
	I1108 00:33:35.262790   56540 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:33:35.262801   56540 start_flags.go:323] config:
	{Name:newest-cni-409933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-409933 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.8 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:33:35.262961   56540 iso.go:125] acquiring lock: {Name:mk02d02b2a7a45dbdd1b46a32fb0724673cb4d8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:33:35.264713   56540 out.go:177] * Starting control plane node newest-cni-409933 in cluster newest-cni-409933
	I1108 00:33:35.266040   56540 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1108 00:33:35.266079   56540 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1108 00:33:35.266092   56540 cache.go:56] Caching tarball of preloaded images
	I1108 00:33:35.266179   56540 preload.go:174] Found /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 00:33:35.266194   56540 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1108 00:33:35.266343   56540 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/newest-cni-409933/config.json ...
	I1108 00:33:35.266562   56540 start.go:365] acquiring machines lock for newest-cni-409933: {Name:mkf032f30be570950285b6e092e75fb29cc3d166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1108 00:33:35.266610   56540 start.go:369] acquired machines lock for "newest-cni-409933" in 28.437µs
	I1108 00:33:35.266630   56540 start.go:96] Skipping create...Using existing machine configuration
	I1108 00:33:35.266639   56540 fix.go:54] fixHost starting: 
	I1108 00:33:35.266982   56540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:33:35.267026   56540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:33:35.281228   56540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43865
	I1108 00:33:35.281644   56540 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:33:35.282095   56540 main.go:141] libmachine: Using API Version  1
	I1108 00:33:35.282122   56540 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:33:35.282462   56540 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:33:35.282666   56540 main.go:141] libmachine: (newest-cni-409933) Calling .DriverName
	I1108 00:33:35.282824   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetState
	I1108 00:33:35.284378   56540 fix.go:102] recreateIfNeeded on newest-cni-409933: state=Stopped err=<nil>
	I1108 00:33:35.284431   56540 main.go:141] libmachine: (newest-cni-409933) Calling .DriverName
	W1108 00:33:35.284605   56540 fix.go:128] unexpected machine state, will restart: <nil>
	I1108 00:33:35.286407   56540 out.go:177] * Restarting existing kvm2 VM for "newest-cni-409933" ...
	I1108 00:33:35.287786   56540 main.go:141] libmachine: (newest-cni-409933) Calling .Start
	I1108 00:33:35.287941   56540 main.go:141] libmachine: (newest-cni-409933) Ensuring networks are active...
	I1108 00:33:35.288716   56540 main.go:141] libmachine: (newest-cni-409933) Ensuring network default is active
	I1108 00:33:35.289240   56540 main.go:141] libmachine: (newest-cni-409933) Ensuring network mk-newest-cni-409933 is active
	I1108 00:33:35.289587   56540 main.go:141] libmachine: (newest-cni-409933) Getting domain xml...
	I1108 00:33:35.290340   56540 main.go:141] libmachine: (newest-cni-409933) Creating domain...
	I1108 00:33:36.594411   56540 main.go:141] libmachine: (newest-cni-409933) Waiting to get IP...
	I1108 00:33:36.595303   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:36.595708   56540 main.go:141] libmachine: (newest-cni-409933) DBG | unable to find current IP address of domain newest-cni-409933 in network mk-newest-cni-409933
	I1108 00:33:36.595764   56540 main.go:141] libmachine: (newest-cni-409933) DBG | I1108 00:33:36.595687   56574 retry.go:31] will retry after 294.484962ms: waiting for machine to come up
	I1108 00:33:36.892312   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:36.892856   56540 main.go:141] libmachine: (newest-cni-409933) DBG | unable to find current IP address of domain newest-cni-409933 in network mk-newest-cni-409933
	I1108 00:33:36.892875   56540 main.go:141] libmachine: (newest-cni-409933) DBG | I1108 00:33:36.892792   56574 retry.go:31] will retry after 284.169544ms: waiting for machine to come up
	I1108 00:33:37.178258   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:37.178736   56540 main.go:141] libmachine: (newest-cni-409933) DBG | unable to find current IP address of domain newest-cni-409933 in network mk-newest-cni-409933
	I1108 00:33:37.178802   56540 main.go:141] libmachine: (newest-cni-409933) DBG | I1108 00:33:37.178661   56574 retry.go:31] will retry after 330.85581ms: waiting for machine to come up
	I1108 00:33:37.510994   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:37.511484   56540 main.go:141] libmachine: (newest-cni-409933) DBG | unable to find current IP address of domain newest-cni-409933 in network mk-newest-cni-409933
	I1108 00:33:37.511510   56540 main.go:141] libmachine: (newest-cni-409933) DBG | I1108 00:33:37.511433   56574 retry.go:31] will retry after 571.956452ms: waiting for machine to come up
	I1108 00:33:38.085081   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:38.085544   56540 main.go:141] libmachine: (newest-cni-409933) DBG | unable to find current IP address of domain newest-cni-409933 in network mk-newest-cni-409933
	I1108 00:33:38.085574   56540 main.go:141] libmachine: (newest-cni-409933) DBG | I1108 00:33:38.085485   56574 retry.go:31] will retry after 720.902327ms: waiting for machine to come up
	I1108 00:33:38.808408   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:38.808903   56540 main.go:141] libmachine: (newest-cni-409933) DBG | unable to find current IP address of domain newest-cni-409933 in network mk-newest-cni-409933
	I1108 00:33:38.808941   56540 main.go:141] libmachine: (newest-cni-409933) DBG | I1108 00:33:38.808839   56574 retry.go:31] will retry after 659.002475ms: waiting for machine to come up
	I1108 00:33:39.469738   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:39.470248   56540 main.go:141] libmachine: (newest-cni-409933) DBG | unable to find current IP address of domain newest-cni-409933 in network mk-newest-cni-409933
	I1108 00:33:39.470307   56540 main.go:141] libmachine: (newest-cni-409933) DBG | I1108 00:33:39.470209   56574 retry.go:31] will retry after 745.0312ms: waiting for machine to come up
	I1108 00:33:40.217206   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:40.217609   56540 main.go:141] libmachine: (newest-cni-409933) DBG | unable to find current IP address of domain newest-cni-409933 in network mk-newest-cni-409933
	I1108 00:33:40.217634   56540 main.go:141] libmachine: (newest-cni-409933) DBG | I1108 00:33:40.217581   56574 retry.go:31] will retry after 1.415225391s: waiting for machine to come up
	I1108 00:33:41.634026   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:41.634416   56540 main.go:141] libmachine: (newest-cni-409933) DBG | unable to find current IP address of domain newest-cni-409933 in network mk-newest-cni-409933
	I1108 00:33:41.634449   56540 main.go:141] libmachine: (newest-cni-409933) DBG | I1108 00:33:41.634367   56574 retry.go:31] will retry after 1.463944902s: waiting for machine to come up
	I1108 00:33:43.100344   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:43.100782   56540 main.go:141] libmachine: (newest-cni-409933) DBG | unable to find current IP address of domain newest-cni-409933 in network mk-newest-cni-409933
	I1108 00:33:43.100810   56540 main.go:141] libmachine: (newest-cni-409933) DBG | I1108 00:33:43.100737   56574 retry.go:31] will retry after 1.685194577s: waiting for machine to come up
	I1108 00:33:44.787324   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:44.787861   56540 main.go:141] libmachine: (newest-cni-409933) DBG | unable to find current IP address of domain newest-cni-409933 in network mk-newest-cni-409933
	I1108 00:33:44.787895   56540 main.go:141] libmachine: (newest-cni-409933) DBG | I1108 00:33:44.787793   56574 retry.go:31] will retry after 1.846892422s: waiting for machine to come up
	I1108 00:33:46.636865   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:46.637338   56540 main.go:141] libmachine: (newest-cni-409933) DBG | unable to find current IP address of domain newest-cni-409933 in network mk-newest-cni-409933
	I1108 00:33:46.637371   56540 main.go:141] libmachine: (newest-cni-409933) DBG | I1108 00:33:46.637273   56574 retry.go:31] will retry after 3.262475409s: waiting for machine to come up
	I1108 00:33:49.903327   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:49.903866   56540 main.go:141] libmachine: (newest-cni-409933) DBG | unable to find current IP address of domain newest-cni-409933 in network mk-newest-cni-409933
	I1108 00:33:49.903893   56540 main.go:141] libmachine: (newest-cni-409933) DBG | I1108 00:33:49.903829   56574 retry.go:31] will retry after 3.132216136s: waiting for machine to come up
	I1108 00:33:53.039527   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:53.040024   56540 main.go:141] libmachine: (newest-cni-409933) DBG | unable to find current IP address of domain newest-cni-409933 in network mk-newest-cni-409933
	I1108 00:33:53.040043   56540 main.go:141] libmachine: (newest-cni-409933) DBG | I1108 00:33:53.039985   56574 retry.go:31] will retry after 3.919010817s: waiting for machine to come up
	I1108 00:33:56.960867   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:56.961411   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has current primary IP address 192.168.50.8 and MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:56.961446   56540 main.go:141] libmachine: (newest-cni-409933) Found IP for machine: 192.168.50.8
	I1108 00:33:56.961469   56540 main.go:141] libmachine: (newest-cni-409933) Reserving static IP address...
	I1108 00:33:56.961856   56540 main.go:141] libmachine: (newest-cni-409933) Reserved static IP address: 192.168.50.8
	I1108 00:33:56.961885   56540 main.go:141] libmachine: (newest-cni-409933) Waiting for SSH to be available...
	I1108 00:33:56.961909   56540 main.go:141] libmachine: (newest-cni-409933) DBG | found host DHCP lease matching {name: "newest-cni-409933", mac: "52:54:00:46:8b:78", ip: "192.168.50.8"} in network mk-newest-cni-409933: {Iface:virbr4 ExpiryTime:2023-11-08 01:33:48 +0000 UTC Type:0 Mac:52:54:00:46:8b:78 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:newest-cni-409933 Clientid:01:52:54:00:46:8b:78}
	I1108 00:33:56.961947   56540 main.go:141] libmachine: (newest-cni-409933) DBG | skip adding static IP to network mk-newest-cni-409933 - found existing host DHCP lease matching {name: "newest-cni-409933", mac: "52:54:00:46:8b:78", ip: "192.168.50.8"}
	I1108 00:33:56.961975   56540 main.go:141] libmachine: (newest-cni-409933) DBG | Getting to WaitForSSH function...
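
Note on the wait loop above: each failed libvirt DHCP-lease lookup schedules another attempt after a randomized, growing delay (294ms, 284ms, ... up to several seconds), as the retry.go:31 lines show. The Go sketch below illustrates that poll-with-backoff pattern; retryUntil is an illustrative helper written for this report, not minikube's actual retry.go.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil polls fn with a randomized, growing delay until it succeeds
// or the deadline passes, mirroring the cadence of the retry.go:31 lines.
func retryUntil(deadline time.Time, fn func() error) error {
	base := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		// Randomize and grow the wait so the DHCP lease table is not hammered.
		wait := base + time.Duration(rand.Int63n(int64(base)))
		time.Sleep(wait)
		if base < 4*time.Second {
			base *= 2
		}
	}
}

func main() {
	err := retryUntil(time.Now().Add(2*time.Second), func() error {
		return errors.New("unable to find current IP address") // placeholder lookup
	})
	fmt.Println(err)
}
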
	I1108 00:33:56.964084   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:56.964394   56540 main.go:141] libmachine: (newest-cni-409933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:8b:78", ip: ""} in network mk-newest-cni-409933: {Iface:virbr4 ExpiryTime:2023-11-08 01:33:48 +0000 UTC Type:0 Mac:52:54:00:46:8b:78 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:newest-cni-409933 Clientid:01:52:54:00:46:8b:78}
	I1108 00:33:56.964429   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined IP address 192.168.50.8 and MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:56.964526   56540 main.go:141] libmachine: (newest-cni-409933) DBG | Using SSH client type: external
	I1108 00:33:56.964565   56540 main.go:141] libmachine: (newest-cni-409933) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/newest-cni-409933/id_rsa (-rw-------)
	I1108 00:33:56.964620   56540 main.go:141] libmachine: (newest-cni-409933) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.8 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/newest-cni-409933/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1108 00:33:56.964644   56540 main.go:141] libmachine: (newest-cni-409933) DBG | About to run SSH command:
	I1108 00:33:56.964661   56540 main.go:141] libmachine: (newest-cni-409933) DBG | exit 0
	I1108 00:33:57.060617   56540 main.go:141] libmachine: (newest-cni-409933) DBG | SSH cmd err, output: <nil>: 
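
The "Using SSH client type: external" lines above probe machine readiness by exec'ing /usr/bin/ssh with host-key checking disabled and running `exit 0`. A minimal sketch of that probe; the argv mirrors the logged one, while the host and key path are placeholders:

package main

import (
	"fmt"
	"os/exec"
)

// sshExitZero shells out to the system ssh binary and runs `exit 0`,
// succeeding only once the guest's sshd accepts the connection.
func sshExitZero(host, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + host,
		"exit 0", // the probe command from the log
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh probe failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(sshExitZero("192.168.50.8", "/path/to/id_rsa"))
}
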
	I1108 00:33:57.061016   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetConfigRaw
	I1108 00:33:57.061677   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetIP
	I1108 00:33:57.064070   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:57.064463   56540 main.go:141] libmachine: (newest-cni-409933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:8b:78", ip: ""} in network mk-newest-cni-409933: {Iface:virbr4 ExpiryTime:2023-11-08 01:33:48 +0000 UTC Type:0 Mac:52:54:00:46:8b:78 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:newest-cni-409933 Clientid:01:52:54:00:46:8b:78}
	I1108 00:33:57.064488   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined IP address 192.168.50.8 and MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:57.064722   56540 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/newest-cni-409933/config.json ...
	I1108 00:33:57.064962   56540 machine.go:88] provisioning docker machine ...
	I1108 00:33:57.064986   56540 main.go:141] libmachine: (newest-cni-409933) Calling .DriverName
	I1108 00:33:57.065193   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetMachineName
	I1108 00:33:57.065367   56540 buildroot.go:166] provisioning hostname "newest-cni-409933"
	I1108 00:33:57.065392   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetMachineName
	I1108 00:33:57.065569   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHHostname
	I1108 00:33:57.068236   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:57.068584   56540 main.go:141] libmachine: (newest-cni-409933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:8b:78", ip: ""} in network mk-newest-cni-409933: {Iface:virbr4 ExpiryTime:2023-11-08 01:33:48 +0000 UTC Type:0 Mac:52:54:00:46:8b:78 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:newest-cni-409933 Clientid:01:52:54:00:46:8b:78}
	I1108 00:33:57.068612   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined IP address 192.168.50.8 and MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:57.068745   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHPort
	I1108 00:33:57.068929   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHKeyPath
	I1108 00:33:57.069093   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHKeyPath
	I1108 00:33:57.069231   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHUsername
	I1108 00:33:57.069388   56540 main.go:141] libmachine: Using SSH client type: native
	I1108 00:33:57.069736   56540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.8 22 <nil> <nil>}
	I1108 00:33:57.069756   56540 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-409933 && echo "newest-cni-409933" | sudo tee /etc/hostname
	I1108 00:33:57.209950   56540 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-409933
	
	I1108 00:33:57.209984   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHHostname
	I1108 00:33:57.212950   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:57.213252   56540 main.go:141] libmachine: (newest-cni-409933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:8b:78", ip: ""} in network mk-newest-cni-409933: {Iface:virbr4 ExpiryTime:2023-11-08 01:33:48 +0000 UTC Type:0 Mac:52:54:00:46:8b:78 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:newest-cni-409933 Clientid:01:52:54:00:46:8b:78}
	I1108 00:33:57.213283   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined IP address 192.168.50.8 and MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:57.213404   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHPort
	I1108 00:33:57.213604   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHKeyPath
	I1108 00:33:57.213797   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHKeyPath
	I1108 00:33:57.213990   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHUsername
	I1108 00:33:57.214154   56540 main.go:141] libmachine: Using SSH client type: native
	I1108 00:33:57.214522   56540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.8 22 <nil> <nil>}
	I1108 00:33:57.214546   56540 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-409933' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-409933/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-409933' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 00:33:57.345001   56540 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 00:33:57.345028   56540 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1108 00:33:57.345054   56540 buildroot.go:174] setting up certificates
	I1108 00:33:57.345069   56540 provision.go:83] configureAuth start
	I1108 00:33:57.345081   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetMachineName
	I1108 00:33:57.345373   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetIP
	I1108 00:33:57.348278   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:57.348728   56540 main.go:141] libmachine: (newest-cni-409933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:8b:78", ip: ""} in network mk-newest-cni-409933: {Iface:virbr4 ExpiryTime:2023-11-08 01:33:48 +0000 UTC Type:0 Mac:52:54:00:46:8b:78 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:newest-cni-409933 Clientid:01:52:54:00:46:8b:78}
	I1108 00:33:57.348755   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined IP address 192.168.50.8 and MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:57.348882   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHHostname
	I1108 00:33:57.351179   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:57.351537   56540 main.go:141] libmachine: (newest-cni-409933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:8b:78", ip: ""} in network mk-newest-cni-409933: {Iface:virbr4 ExpiryTime:2023-11-08 01:33:48 +0000 UTC Type:0 Mac:52:54:00:46:8b:78 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:newest-cni-409933 Clientid:01:52:54:00:46:8b:78}
	I1108 00:33:57.351565   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined IP address 192.168.50.8 and MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:57.351646   56540 provision.go:138] copyHostCerts
	I1108 00:33:57.351695   56540 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1108 00:33:57.351716   56540 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1108 00:33:57.351793   56540 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1108 00:33:57.351956   56540 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1108 00:33:57.351972   56540 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1108 00:33:57.352016   56540 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1108 00:33:57.352106   56540 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1108 00:33:57.352116   56540 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1108 00:33:57.352146   56540 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1108 00:33:57.352191   56540 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.newest-cni-409933 san=[192.168.50.8 192.168.50.8 localhost 127.0.0.1 minikube newest-cni-409933]
	I1108 00:33:57.488876   56540 provision.go:172] copyRemoteCerts
	I1108 00:33:57.488940   56540 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 00:33:57.488979   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHHostname
	I1108 00:33:57.491531   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:57.491829   56540 main.go:141] libmachine: (newest-cni-409933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:8b:78", ip: ""} in network mk-newest-cni-409933: {Iface:virbr4 ExpiryTime:2023-11-08 01:33:48 +0000 UTC Type:0 Mac:52:54:00:46:8b:78 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:newest-cni-409933 Clientid:01:52:54:00:46:8b:78}
	I1108 00:33:57.491863   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined IP address 192.168.50.8 and MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:57.492020   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHPort
	I1108 00:33:57.492208   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHKeyPath
	I1108 00:33:57.492357   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHUsername
	I1108 00:33:57.492506   56540 sshutil.go:53] new ssh client: &{IP:192.168.50.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/newest-cni-409933/id_rsa Username:docker}
	I1108 00:33:57.586588   56540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 00:33:57.610580   56540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1108 00:33:57.634264   56540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 00:33:57.658170   56540 provision.go:86] duration metric: configureAuth took 313.090774ms
	I1108 00:33:57.658204   56540 buildroot.go:189] setting minikube options for container-runtime
	I1108 00:33:57.658406   56540 config.go:182] Loaded profile config "newest-cni-409933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:33:57.658473   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHHostname
	I1108 00:33:57.660996   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:57.661388   56540 main.go:141] libmachine: (newest-cni-409933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:8b:78", ip: ""} in network mk-newest-cni-409933: {Iface:virbr4 ExpiryTime:2023-11-08 01:33:48 +0000 UTC Type:0 Mac:52:54:00:46:8b:78 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:newest-cni-409933 Clientid:01:52:54:00:46:8b:78}
	I1108 00:33:57.661430   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined IP address 192.168.50.8 and MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:57.661540   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHPort
	I1108 00:33:57.661723   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHKeyPath
	I1108 00:33:57.661926   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHKeyPath
	I1108 00:33:57.662076   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHUsername
	I1108 00:33:57.662247   56540 main.go:141] libmachine: Using SSH client type: native
	I1108 00:33:57.662549   56540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.8 22 <nil> <nil>}
	I1108 00:33:57.662564   56540 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 00:33:58.007884   56540 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 00:33:58.007907   56540 machine.go:91] provisioned docker machine in 942.930146ms
	I1108 00:33:58.007917   56540 start.go:300] post-start starting for "newest-cni-409933" (driver="kvm2")
	I1108 00:33:58.007927   56540 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 00:33:58.007947   56540 main.go:141] libmachine: (newest-cni-409933) Calling .DriverName
	I1108 00:33:58.008261   56540 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 00:33:58.008292   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHHostname
	I1108 00:33:58.011473   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:58.011874   56540 main.go:141] libmachine: (newest-cni-409933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:8b:78", ip: ""} in network mk-newest-cni-409933: {Iface:virbr4 ExpiryTime:2023-11-08 01:33:48 +0000 UTC Type:0 Mac:52:54:00:46:8b:78 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:newest-cni-409933 Clientid:01:52:54:00:46:8b:78}
	I1108 00:33:58.011905   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined IP address 192.168.50.8 and MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:58.012093   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHPort
	I1108 00:33:58.012303   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHKeyPath
	I1108 00:33:58.012481   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHUsername
	I1108 00:33:58.012670   56540 sshutil.go:53] new ssh client: &{IP:192.168.50.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/newest-cni-409933/id_rsa Username:docker}
	I1108 00:33:58.108230   56540 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 00:33:58.112787   56540 info.go:137] Remote host: Buildroot 2021.02.12
	I1108 00:33:58.112808   56540 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1108 00:33:58.112895   56540 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1108 00:33:58.112978   56540 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1108 00:33:58.113093   56540 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 00:33:58.122880   56540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:33:58.145112   56540 start.go:303] post-start completed in 137.183478ms
	I1108 00:33:58.145131   56540 fix.go:56] fixHost completed within 22.878492432s
	I1108 00:33:58.145148   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHHostname
	I1108 00:33:58.147806   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:58.148161   56540 main.go:141] libmachine: (newest-cni-409933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:8b:78", ip: ""} in network mk-newest-cni-409933: {Iface:virbr4 ExpiryTime:2023-11-08 01:33:48 +0000 UTC Type:0 Mac:52:54:00:46:8b:78 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:newest-cni-409933 Clientid:01:52:54:00:46:8b:78}
	I1108 00:33:58.148194   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined IP address 192.168.50.8 and MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:58.148372   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHPort
	I1108 00:33:58.148563   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHKeyPath
	I1108 00:33:58.148735   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHKeyPath
	I1108 00:33:58.148900   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHUsername
	I1108 00:33:58.149128   56540 main.go:141] libmachine: Using SSH client type: native
	I1108 00:33:58.149432   56540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.8 22 <nil> <nil>}
	I1108 00:33:58.149443   56540 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1108 00:33:58.281961   56540 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699403638.265261773
	
	I1108 00:33:58.281983   56540 fix.go:206] guest clock: 1699403638.265261773
	I1108 00:33:58.281992   56540 fix.go:219] Guest: 2023-11-08 00:33:58.265261773 +0000 UTC Remote: 2023-11-08 00:33:58.145134379 +0000 UTC m=+23.034221327 (delta=120.127394ms)
	I1108 00:33:58.282035   56540 fix.go:190] guest clock delta is within tolerance: 120.127394ms
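
The fix.go lines above compare the guest clock (read via `date +%s.%N` over SSH) against the host clock and accept the restart only if the delta is within tolerance. A minimal sketch of that comparison, using the timestamps from this log as the example; clockDelta is an illustrative helper, not minikube's fix.go:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output (seconds.nanoseconds,
// %N is zero-padded to 9 digits) and returns guest-minus-host skew.
func clockDelta(guestOut string, hostNow time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Unix(sec, nsec).Sub(hostNow), nil
}

func main() {
	// Guest and host timestamps taken from the log lines above.
	d, _ := clockDelta("1699403638.265261773", time.Unix(1699403638, 145134379))
	fmt.Println(d) // ~120.127394ms, within the logged tolerance
}
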
	I1108 00:33:58.282065   56540 start.go:83] releasing machines lock for "newest-cni-409933", held for 23.015424928s
	I1108 00:33:58.282090   56540 main.go:141] libmachine: (newest-cni-409933) Calling .DriverName
	I1108 00:33:58.282333   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetIP
	I1108 00:33:58.285301   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:58.285807   56540 main.go:141] libmachine: (newest-cni-409933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:8b:78", ip: ""} in network mk-newest-cni-409933: {Iface:virbr4 ExpiryTime:2023-11-08 01:33:48 +0000 UTC Type:0 Mac:52:54:00:46:8b:78 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:newest-cni-409933 Clientid:01:52:54:00:46:8b:78}
	I1108 00:33:58.285837   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined IP address 192.168.50.8 and MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:58.286058   56540 main.go:141] libmachine: (newest-cni-409933) Calling .DriverName
	I1108 00:33:58.286524   56540 main.go:141] libmachine: (newest-cni-409933) Calling .DriverName
	I1108 00:33:58.286724   56540 main.go:141] libmachine: (newest-cni-409933) Calling .DriverName
	I1108 00:33:58.286809   56540 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 00:33:58.286850   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHHostname
	I1108 00:33:58.286919   56540 ssh_runner.go:195] Run: cat /version.json
	I1108 00:33:58.286950   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHHostname
	I1108 00:33:58.289435   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:58.289799   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:58.289835   56540 main.go:141] libmachine: (newest-cni-409933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:8b:78", ip: ""} in network mk-newest-cni-409933: {Iface:virbr4 ExpiryTime:2023-11-08 01:33:48 +0000 UTC Type:0 Mac:52:54:00:46:8b:78 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:newest-cni-409933 Clientid:01:52:54:00:46:8b:78}
	I1108 00:33:58.289859   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined IP address 192.168.50.8 and MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:58.290007   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHPort
	I1108 00:33:58.290221   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHKeyPath
	I1108 00:33:58.290252   56540 main.go:141] libmachine: (newest-cni-409933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:8b:78", ip: ""} in network mk-newest-cni-409933: {Iface:virbr4 ExpiryTime:2023-11-08 01:33:48 +0000 UTC Type:0 Mac:52:54:00:46:8b:78 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:newest-cni-409933 Clientid:01:52:54:00:46:8b:78}
	I1108 00:33:58.290290   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined IP address 192.168.50.8 and MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:58.290403   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHUsername
	I1108 00:33:58.290463   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHPort
	I1108 00:33:58.290563   56540 sshutil.go:53] new ssh client: &{IP:192.168.50.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/newest-cni-409933/id_rsa Username:docker}
	I1108 00:33:58.290656   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHKeyPath
	I1108 00:33:58.290795   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHUsername
	I1108 00:33:58.290955   56540 sshutil.go:53] new ssh client: &{IP:192.168.50.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/newest-cni-409933/id_rsa Username:docker}
	I1108 00:33:58.406091   56540 ssh_runner.go:195] Run: systemctl --version
	I1108 00:33:58.412204   56540 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 00:33:58.557965   56540 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 00:33:58.563692   56540 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 00:33:58.563756   56540 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:33:58.578214   56540 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 00:33:58.578237   56540 start.go:472] detecting cgroup driver to use...
	I1108 00:33:58.578304   56540 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 00:33:58.593439   56540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 00:33:58.605676   56540 docker.go:203] disabling cri-docker service (if available) ...
	I1108 00:33:58.605735   56540 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 00:33:58.618412   56540 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 00:33:58.632247   56540 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 00:33:58.743569   56540 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 00:33:58.862956   56540 docker.go:219] disabling docker service ...
	I1108 00:33:58.863045   56540 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 00:33:58.876533   56540 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 00:33:58.888564   56540 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 00:33:59.011604   56540 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 00:33:59.132359   56540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 00:33:59.145040   56540 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 00:33:59.162538   56540 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1108 00:33:59.162613   56540 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:33:59.171843   56540 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 00:33:59.171905   56540 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:33:59.181150   56540 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:33:59.192163   56540 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:33:59.201402   56540 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
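
The crio.go:59/70 steps above patch /etc/crio/crio.conf.d/02-crio.conf with sed to pin the pause image and switch the cgroup manager. A sketch of composing those same commands; runCmd stands in for minikube's SSH runner and the script text mirrors the logged invocations (running it requires the cri-o config to exist):

package main

import (
	"fmt"
	"os/exec"
)

// runCmd executes a shell script locally, standing in for ssh_runner.
func runCmd(script string) error {
	out, err := exec.Command("sh", "-c", script).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %v: %s", script, err, out)
	}
	return nil
}

// configureCrio rewrites the pause image and cgroup settings in-place,
// as the sed commands in the log do.
func configureCrio(pauseImage, cgroupManager string) error {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
	}
	for _, s := range steps {
		if err := runCmd(s); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	fmt.Println(configureCrio("registry.k8s.io/pause:3.9", "cgroupfs"))
}
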
	I1108 00:33:59.210432   56540 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 00:33:59.218173   56540 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1108 00:33:59.218230   56540 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1108 00:33:59.230266   56540 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 00:33:59.240407   56540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 00:33:59.363712   56540 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 00:33:59.537107   56540 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 00:33:59.537179   56540 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 00:33:59.542452   56540 start.go:540] Will wait 60s for crictl version
	I1108 00:33:59.542507   56540 ssh_runner.go:195] Run: which crictl
	I1108 00:33:59.546028   56540 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 00:33:59.592654   56540 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1108 00:33:59.592731   56540 ssh_runner.go:195] Run: crio --version
	I1108 00:33:59.646648   56540 ssh_runner.go:195] Run: crio --version
	I1108 00:33:59.701455   56540 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1108 00:33:59.702826   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetIP
	I1108 00:33:59.705617   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:59.705944   56540 main.go:141] libmachine: (newest-cni-409933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:8b:78", ip: ""} in network mk-newest-cni-409933: {Iface:virbr4 ExpiryTime:2023-11-08 01:33:48 +0000 UTC Type:0 Mac:52:54:00:46:8b:78 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:newest-cni-409933 Clientid:01:52:54:00:46:8b:78}
	I1108 00:33:59.705977   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined IP address 192.168.50.8 and MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:33:59.706158   56540 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1108 00:33:59.710614   56540 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:33:59.726999   56540 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1108 00:33:59.728277   56540 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1108 00:33:59.728340   56540 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:33:59.773545   56540 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1108 00:33:59.773627   56540 ssh_runner.go:195] Run: which lz4
	I1108 00:33:59.777921   56540 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1108 00:33:59.782168   56540 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1108 00:33:59.782196   56540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1108 00:34:01.578576   56540 crio.go:444] Took 1.800697 seconds to copy over tarball
	I1108 00:34:01.578648   56540 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1108 00:34:04.645252   56540 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.066565229s)
	I1108 00:34:04.645283   56540 crio.go:451] Took 3.066680 seconds to extract the tarball
	I1108 00:34:04.645292   56540 ssh_runner.go:146] rm: /preloaded.tar.lz4
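
The preload path above copies a ~457MB lz4 tarball over SSH and then times the extraction ("Completed: ... (3.066565229s)"). A minimal sketch of that measure-and-report pattern; timedRun is an illustrative local stand-in for ssh_runner.go:235:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// timedRun executes a command and reports its wall-clock duration,
// like the "Completed: ..." lines in the log.
func timedRun(name string, args ...string) error {
	start := time.Now()
	err := exec.Command(name, args...).Run()
	fmt.Printf("Completed: %s %v: (%s)\n", name, args, time.Since(start))
	return err
}

func main() {
	// Mirrors the logged extraction; succeeding requires lz4 and the tarball.
	_ = timedRun("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
}
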
	I1108 00:34:04.686097   56540 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:34:04.737398   56540 crio.go:496] all images are preloaded for cri-o runtime.
	I1108 00:34:04.737424   56540 cache_images.go:84] Images are preloaded, skipping loading
	I1108 00:34:04.737530   56540 ssh_runner.go:195] Run: crio config
	I1108 00:34:04.799583   56540 cni.go:84] Creating CNI manager for ""
	I1108 00:34:04.799602   56540 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:34:04.799623   56540 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1108 00:34:04.799641   56540 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.8 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-409933 NodeName:newest-cni-409933 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.8"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.8 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 00:34:04.799762   56540 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.8
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-409933"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.8
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.8"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
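
The generated config above shows how the kubeadm.pod-network-cidr extra option (10.42.0.0/16) flows into both the kubeadm podSubnet and the kube-proxy clusterCIDR. A sketch of rendering such a stanza with text/template; the template and field names are illustrative, not minikube's actual bootstrapper template:

package main

import (
	"os"
	"text/template"
)

// netTmpl renders the networking stanza of the kubeadm config from the
// cluster parameters logged at kubeadm.go:176.
const netTmpl = `networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("net").Parse(netTmpl))
	_ = t.Execute(os.Stdout, map[string]string{
		"DNSDomain":   "cluster.local",
		"PodSubnet":   "10.42.0.0/16", // from kubeadm.pod-network-cidr
		"ServiceCIDR": "10.96.0.0/12",
	})
}
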
	
	I1108 00:34:04.799818   56540 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-409933 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:newest-cni-409933 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1108 00:34:04.799871   56540 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1108 00:34:04.808896   56540 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 00:34:04.808958   56540 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 00:34:04.820611   56540 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (412 bytes)
	I1108 00:34:04.837708   56540 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 00:34:04.855328   56540 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1108 00:34:04.872736   56540 ssh_runner.go:195] Run: grep 192.168.50.8	control-plane.minikube.internal$ /etc/hosts
	I1108 00:34:04.876566   56540 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.8	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:34:04.888940   56540 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/newest-cni-409933 for IP: 192.168.50.8
	I1108 00:34:04.888970   56540 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:34:04.889127   56540 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1108 00:34:04.889275   56540 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1108 00:34:04.889405   56540 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/newest-cni-409933/client.key
	I1108 00:34:04.889498   56540 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/newest-cni-409933/apiserver.key.28b7601b
	I1108 00:34:04.889563   56540 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/newest-cni-409933/proxy-client.key
	I1108 00:34:04.889670   56540 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem (1338 bytes)
	W1108 00:34:04.889704   56540 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848_empty.pem, impossibly tiny 0 bytes
	I1108 00:34:04.889714   56540 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 00:34:04.889739   56540 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1108 00:34:04.889761   56540 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1108 00:34:04.889786   56540 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1108 00:34:04.889827   56540 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:34:04.890393   56540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/newest-cni-409933/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1108 00:34:04.916530   56540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/newest-cni-409933/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1108 00:34:04.941553   56540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/newest-cni-409933/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 00:34:04.970268   56540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/newest-cni-409933/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 00:34:04.996626   56540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 00:34:05.021490   56540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 00:34:05.046155   56540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 00:34:05.072207   56540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 00:34:05.097287   56540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 00:34:05.119992   56540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem --> /usr/share/ca-certificates/16848.pem (1338 bytes)
	I1108 00:34:05.142315   56540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /usr/share/ca-certificates/168482.pem (1708 bytes)
	I1108 00:34:05.167122   56540 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 00:34:05.285254   56540 ssh_runner.go:195] Run: openssl version
	I1108 00:34:05.291497   56540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168482.pem && ln -fs /usr/share/ca-certificates/168482.pem /etc/ssl/certs/168482.pem"
	I1108 00:34:05.302997   56540 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168482.pem
	I1108 00:34:05.308033   56540 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1108 00:34:05.308092   56540 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168482.pem
	I1108 00:34:05.313558   56540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168482.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 00:34:05.323419   56540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 00:34:05.334041   56540 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:34:05.339124   56540 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:34:05.339180   56540 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:34:05.344935   56540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 00:34:05.355886   56540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16848.pem && ln -fs /usr/share/ca-certificates/16848.pem /etc/ssl/certs/16848.pem"
	I1108 00:34:05.368978   56540 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16848.pem
	I1108 00:34:05.373554   56540 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1108 00:34:05.373602   56540 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16848.pem
	I1108 00:34:05.379574   56540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16848.pem /etc/ssl/certs/51391683.0"
	I1108 00:34:05.389571   56540 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1108 00:34:05.394564   56540 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 00:34:05.400775   56540 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 00:34:05.407068   56540 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 00:34:05.413094   56540 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 00:34:05.419437   56540 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 00:34:05.425385   56540 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1108 00:34:05.431380   56540 kubeadm.go:404] StartCluster: {Name:newest-cni-409933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-409933 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.8 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:34:05.431488   56540 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 00:34:05.431541   56540 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:34:05.470727   56540 cri.go:89] found id: ""
	I1108 00:34:05.470814   56540 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 00:34:05.482049   56540 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1108 00:34:05.482079   56540 kubeadm.go:636] restartCluster start
	I1108 00:34:05.482141   56540 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 00:34:05.492014   56540 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:34:05.493069   56540 kubeconfig.go:135] verify returned: extract IP: "newest-cni-409933" does not appear in /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:34:05.493865   56540 kubeconfig.go:146] "newest-cni-409933" context is missing from /home/jenkins/minikube-integration/17585-9647/kubeconfig - will repair!
	I1108 00:34:05.495036   56540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
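
The repair above notices that the "newest-cni-409933" context is missing from the kubeconfig and re-adds it under a file lock. A sketch of the verification step using client-go's clientcmd package, an assumption rather than minikube's exact code:

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        path := "/home/jenkins/minikube-integration/17585-9647/kubeconfig"
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            panic(err)
        }
        if _, ok := cfg.Contexts["newest-cni-409933"]; !ok {
            fmt.Println(`context "newest-cni-409933" missing - will repair`)
            // A repair would add the cluster/user/context entries back, then
            // persist with clientcmd.WriteToFile(*cfg, path).
        }
    }
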
	I1108 00:34:05.544430   56540 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 00:34:05.555997   56540 api_server.go:166] Checking apiserver status ...
	I1108 00:34:05.556059   56540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:34:05.568201   56540 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:34:05.568224   56540 api_server.go:166] Checking apiserver status ...
	I1108 00:34:05.568271   56540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:34:05.580268   56540 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:34:06.080947   56540 api_server.go:166] Checking apiserver status ...
	I1108 00:34:06.081050   56540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:34:06.093718   56540 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:34:06.581265   56540 api_server.go:166] Checking apiserver status ...
	I1108 00:34:06.581362   56540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:34:06.593745   56540 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:34:07.081384   56540 api_server.go:166] Checking apiserver status ...
	I1108 00:34:07.081458   56540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:34:07.094648   56540 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:34:07.581261   56540 api_server.go:166] Checking apiserver status ...
	I1108 00:34:07.581348   56540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:34:07.593225   56540 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:34:08.080744   56540 api_server.go:166] Checking apiserver status ...
	I1108 00:34:08.080836   56540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:34:08.093237   56540 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:34:08.580668   56540 api_server.go:166] Checking apiserver status ...
	I1108 00:34:08.580737   56540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:34:08.593308   56540 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:34:09.080855   56540 api_server.go:166] Checking apiserver status ...
	I1108 00:34:09.080964   56540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:34:09.093166   56540 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:34:09.580700   56540 api_server.go:166] Checking apiserver status ...
	I1108 00:34:09.580804   56540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:34:09.593277   56540 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:34:10.080415   56540 api_server.go:166] Checking apiserver status ...
	I1108 00:34:10.080486   56540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:34:10.093989   56540 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:34:10.580754   56540 api_server.go:166] Checking apiserver status ...
	I1108 00:34:10.580862   56540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:34:10.593642   56540 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:34:11.080747   56540 api_server.go:166] Checking apiserver status ...
	I1108 00:34:11.080853   56540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:34:11.092993   56540 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:34:11.580566   56540 api_server.go:166] Checking apiserver status ...
	I1108 00:34:11.580651   56540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:34:11.592598   56540 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:34:12.081032   56540 api_server.go:166] Checking apiserver status ...
	I1108 00:34:12.081129   56540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:34:12.093750   56540 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:34:12.580941   56540 api_server.go:166] Checking apiserver status ...
	I1108 00:34:12.581016   56540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:34:12.594281   56540 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:34:13.080729   56540 api_server.go:166] Checking apiserver status ...
	I1108 00:34:13.080838   56540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:34:13.093646   56540 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:34:13.581268   56540 api_server.go:166] Checking apiserver status ...
	I1108 00:34:13.581383   56540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:34:13.593586   56540 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:34:14.081253   56540 api_server.go:166] Checking apiserver status ...
	I1108 00:34:14.081343   56540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:34:14.093497   56540 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:34:14.581056   56540 api_server.go:166] Checking apiserver status ...
	I1108 00:34:14.581136   56540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:34:14.594090   56540 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:34:15.080461   56540 api_server.go:166] Checking apiserver status ...
	I1108 00:34:15.080570   56540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:34:15.093197   56540 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:34:15.556335   56540 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
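
The repeated `pgrep` failures above form a poll loop: the apiserver PID check is retried roughly every 500 ms until a ten-second context deadline expires, at which point the code falls back to reconfiguring the cluster. A minimal sketch of that pattern, illustrative rather than minikube's actual implementation:

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // Ten-second budget, mirroring the window between 00:34:05 and 00:34:15.
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            // pgrep exits non-zero when no process matches (the log runs it via sudo).
            if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                fmt.Println("apiserver process found")
                return
            }
            select {
            case <-ctx.Done():
                fmt.Println("apiserver error:", ctx.Err()) // "context deadline exceeded"
                return
            case <-ticker.C:
            }
        }
    }
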
	I1108 00:34:15.556365   56540 kubeadm.go:1128] stopping kube-system containers ...
	I1108 00:34:15.556388   56540 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1108 00:34:15.556449   56540 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:34:15.601922   56540 cri.go:89] found id: ""
	I1108 00:34:15.602010   56540 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1108 00:34:15.619021   56540 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:34:15.628556   56540 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:34:15.628611   56540 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:34:15.637693   56540 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1108 00:34:15.637713   56540 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:34:15.769662   56540 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:34:16.735304   56540 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:34:16.939161   56540 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:34:17.026371   56540 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
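
Rather than a full `kubeadm init`, the restart path replays individual init phases in order: certs, kubeconfig, kubelet-start, control-plane, and local etcd, all against the regenerated /var/tmp/minikube/kubeadm.yaml. A hypothetical sketch of the same sequence:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Phases in the same order as the log above.
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
            if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
                fmt.Printf("phase %v failed: %v\n%s", p, err, out)
                return
            }
        }
        fmt.Println("control plane reconfigured")
    }
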
	I1108 00:34:17.105187   56540 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:34:17.105273   56540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:34:17.121465   56540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:34:17.632719   56540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:34:18.133337   56540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:34:18.633616   56540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:34:19.133532   56540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:34:19.632674   56540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:34:19.655533   56540 api_server.go:72] duration metric: took 2.550344119s to wait for apiserver process to appear ...
	I1108 00:34:19.655556   56540 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:34:19.655569   56540 api_server.go:253] Checking apiserver healthz at https://192.168.50.8:8443/healthz ...
	I1108 00:34:23.431037   56540 api_server.go:279] https://192.168.50.8:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:34:23.431062   56540 api_server.go:103] status: https://192.168.50.8:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:34:23.431073   56540 api_server.go:253] Checking apiserver healthz at https://192.168.50.8:8443/healthz ...
	I1108 00:34:23.463780   56540 api_server.go:279] https://192.168.50.8:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:34:23.463811   56540 api_server.go:103] status: https://192.168.50.8:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:34:23.964063   56540 api_server.go:253] Checking apiserver healthz at https://192.168.50.8:8443/healthz ...
	I1108 00:34:23.970784   56540 api_server.go:279] https://192.168.50.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:34:23.970814   56540 api_server.go:103] status: https://192.168.50.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:34:24.464315   56540 api_server.go:253] Checking apiserver healthz at https://192.168.50.8:8443/healthz ...
	I1108 00:34:24.469663   56540 api_server.go:279] https://192.168.50.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:34:24.469688   56540 api_server.go:103] status: https://192.168.50.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:34:24.963958   56540 api_server.go:253] Checking apiserver healthz at https://192.168.50.8:8443/healthz ...
	I1108 00:34:24.969639   56540 api_server.go:279] https://192.168.50.8:8443/healthz returned 200:
	ok
	I1108 00:34:24.980018   56540 api_server.go:141] control plane version: v1.28.3
	I1108 00:34:24.980045   56540 api_server.go:131] duration metric: took 5.324481731s to wait for apiserver health ...
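
The health wait above walks through the apiserver's startup states: an anonymous GET to /healthz first returns 403 (RBAC for system:anonymous is not bootstrapped yet), then 500 while poststarthooks such as rbac/bootstrap-roles are still failing (the body lists them), and finally 200 "ok". A minimal sketch of one such probe:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            // The apiserver serves a cluster-local CA certificate; verification
            // is skipped here only to keep the probe sketch self-contained.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://192.168.50.8:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // Expect 403, then 500 with a poststarthook report, then 200 "ok".
        fmt.Println(resp.StatusCode, string(body))
    }
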
	I1108 00:34:24.980061   56540 cni.go:84] Creating CNI manager for ""
	I1108 00:34:24.980069   56540 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:34:24.982018   56540 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:34:24.983594   56540 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:34:25.008007   56540 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
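
The 457-byte /etc/cni/net.d/1-k8s.conflist itself is not printed in the log. The sketch below writes a typical bridge CNI config of the kind this step installs, using the cluster's pod-network-cidr (10.42.0.0/16 from the ExtraOptions above); the exact field values are assumptions:

    package main

    import "os"

    // Assumed content: the real conflist minikube copies is not shown above.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "k8s",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.42.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
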
	I1108 00:34:25.069409   56540 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:34:25.100581   56540 system_pods.go:59] 8 kube-system pods found
	I1108 00:34:25.100625   56540 system_pods.go:61] "coredns-5dd5756b68-ll5wq" [e8399369-ce00-47cd-a19f-b0b557ec45c9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 00:34:25.100638   56540 system_pods.go:61] "etcd-newest-cni-409933" [6e5355b3-8329-4681-8597-7f25839b25f4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 00:34:25.100652   56540 system_pods.go:61] "kube-apiserver-newest-cni-409933" [dc4fc2d8-d008-4e2c-863a-ad811563dde1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 00:34:25.100666   56540 system_pods.go:61] "kube-controller-manager-newest-cni-409933" [e357943d-df33-461b-9019-cd74fa361f45] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 00:34:25.100682   56540 system_pods.go:61] "kube-proxy-pvbfz" [46f0280b-e95e-464b-8285-b019041acff0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 00:34:25.100694   56540 system_pods.go:61] "kube-scheduler-newest-cni-409933" [735b37c0-751c-4c15-8263-c720464e1936] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 00:34:25.100706   56540 system_pods.go:61] "metrics-server-57f55c9bc5-xldkl" [3bf406e9-e592-427e-84dc-a08396038f76] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:34:25.100717   56540 system_pods.go:61] "storage-provisioner" [04ac5f4c-5252-4df8-a51c-9ef03d2fe9eb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 00:34:25.100733   56540 system_pods.go:74] duration metric: took 31.302005ms to wait for pod list to return data ...
	I1108 00:34:25.100745   56540 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:34:25.109100   56540 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:34:25.109142   56540 node_conditions.go:123] node cpu capacity is 2
	I1108 00:34:25.109155   56540 node_conditions.go:105] duration metric: took 8.401951ms to run NodePressure ...
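
The pod and NodePressure checks above list kube-system pods through the freshly healthy apiserver. A sketch of the same listing with client-go, assumed rather than minikube's exact code:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17585-9647/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Println(p.Name, p.Status.Phase) // e.g. "coredns-... Running"
        }
    }
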
	I1108 00:34:25.109174   56540 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:34:25.491614   56540 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 00:34:25.515577   56540 ops.go:34] apiserver oom_adj: -16
	I1108 00:34:25.515603   56540 kubeadm.go:640] restartCluster took 20.033517441s
	I1108 00:34:25.515622   56540 kubeadm.go:406] StartCluster complete in 20.084237833s
	I1108 00:34:25.515637   56540 settings.go:142] acquiring lock: {Name:mk24113e0811d0822c92609e9886aa6fa175d90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:34:25.515697   56540 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:34:25.517558   56540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:34:25.517847   56540 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 00:34:25.518067   56540 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1108 00:34:25.518151   56540 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-409933"
	I1108 00:34:25.518180   56540 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-409933"
	I1108 00:34:25.518184   56540 addons.go:69] Setting default-storageclass=true in profile "newest-cni-409933"
	I1108 00:34:25.518225   56540 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-409933"
	I1108 00:34:25.518190   56540 addons.go:69] Setting metrics-server=true in profile "newest-cni-409933"
	I1108 00:34:25.518302   56540 addons.go:231] Setting addon metrics-server=true in "newest-cni-409933"
	W1108 00:34:25.518313   56540 addons.go:240] addon metrics-server should already be in state true
	I1108 00:34:25.518329   56540 addons.go:69] Setting dashboard=true in profile "newest-cni-409933"
	I1108 00:34:25.518378   56540 addons.go:231] Setting addon dashboard=true in "newest-cni-409933"
	W1108 00:34:25.518397   56540 addons.go:240] addon dashboard should already be in state true
	I1108 00:34:25.518357   56540 host.go:66] Checking if "newest-cni-409933" exists ...
	I1108 00:34:25.518460   56540 host.go:66] Checking if "newest-cni-409933" exists ...
	I1108 00:34:25.518196   56540 config.go:182] Loaded profile config "newest-cni-409933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	W1108 00:34:25.518194   56540 addons.go:240] addon storage-provisioner should already be in state true
	I1108 00:34:25.518629   56540 host.go:66] Checking if "newest-cni-409933" exists ...
	I1108 00:34:25.518751   56540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:34:25.518798   56540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:34:25.518813   56540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:34:25.518834   56540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:34:25.518828   56540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:34:25.518866   56540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:34:25.519079   56540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:34:25.519164   56540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:34:25.523618   56540 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-409933" context rescaled to 1 replicas
	I1108 00:34:25.523649   56540 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.8 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 00:34:25.526298   56540 out.go:177] * Verifying Kubernetes components...
	I1108 00:34:25.527726   56540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:34:25.538353   56540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43395
	I1108 00:34:25.538489   56540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42825
	I1108 00:34:25.538514   56540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41647
	I1108 00:34:25.538586   56540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45751
	I1108 00:34:25.538988   56540 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:34:25.539041   56540 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:34:25.539237   56540 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:34:25.539627   56540 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:34:25.539685   56540 main.go:141] libmachine: Using API Version  1
	I1108 00:34:25.539701   56540 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:34:25.539732   56540 main.go:141] libmachine: Using API Version  1
	I1108 00:34:25.539747   56540 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:34:25.539834   56540 main.go:141] libmachine: Using API Version  1
	I1108 00:34:25.539854   56540 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:34:25.540202   56540 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:34:25.540210   56540 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:34:25.540295   56540 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:34:25.540403   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetState
	I1108 00:34:25.540545   56540 main.go:141] libmachine: Using API Version  1
	I1108 00:34:25.540563   56540 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:34:25.540940   56540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:34:25.540970   56540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:34:25.541006   56540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:34:25.541038   56540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:34:25.541485   56540 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:34:25.542020   56540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:34:25.542058   56540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:34:25.544500   56540 addons.go:231] Setting addon default-storageclass=true in "newest-cni-409933"
	W1108 00:34:25.544523   56540 addons.go:240] addon default-storageclass should already be in state true
	I1108 00:34:25.544550   56540 host.go:66] Checking if "newest-cni-409933" exists ...
	I1108 00:34:25.544994   56540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:34:25.545033   56540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:34:25.558239   56540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35847
	I1108 00:34:25.558899   56540 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:34:25.559376   56540 main.go:141] libmachine: Using API Version  1
	I1108 00:34:25.559393   56540 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:34:25.559864   56540 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:34:25.559931   56540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43777
	I1108 00:34:25.560335   56540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41019
	I1108 00:34:25.560547   56540 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:34:25.560698   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetState
	I1108 00:34:25.560700   56540 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:34:25.561189   56540 main.go:141] libmachine: Using API Version  1
	I1108 00:34:25.561204   56540 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:34:25.561321   56540 main.go:141] libmachine: Using API Version  1
	I1108 00:34:25.561338   56540 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:34:25.561684   56540 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:34:25.561909   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetState
	I1108 00:34:25.562583   56540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45213
	I1108 00:34:25.562882   56540 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:34:25.562948   56540 main.go:141] libmachine: (newest-cni-409933) Calling .DriverName
	I1108 00:34:25.563071   56540 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:34:25.565327   56540 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1108 00:34:25.563196   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetState
	I1108 00:34:25.563522   56540 main.go:141] libmachine: (newest-cni-409933) Calling .DriverName
	I1108 00:34:25.563823   56540 main.go:141] libmachine: Using API Version  1
	I1108 00:34:25.566862   56540 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:34:25.568231   56540 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1108 00:34:25.569720   56540 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1108 00:34:25.569734   56540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1108 00:34:25.569747   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHHostname
	I1108 00:34:25.571490   56540 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1108 00:34:25.567762   56540 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:34:25.568207   56540 main.go:141] libmachine: (newest-cni-409933) Calling .DriverName
	I1108 00:34:25.572951   56540 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 00:34:25.572964   56540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 00:34:25.572984   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHHostname
	I1108 00:34:25.573038   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:34:25.574327   56540 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:34:25.575641   56540 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:34:25.575656   56540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 00:34:25.575670   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHHostname
	I1108 00:34:25.573479   56540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:34:25.575741   56540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:34:25.574687   56540 main.go:141] libmachine: (newest-cni-409933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:8b:78", ip: ""} in network mk-newest-cni-409933: {Iface:virbr4 ExpiryTime:2023-11-08 01:33:48 +0000 UTC Type:0 Mac:52:54:00:46:8b:78 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:newest-cni-409933 Clientid:01:52:54:00:46:8b:78}
	I1108 00:34:25.575976   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined IP address 192.168.50.8 and MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:34:25.574953   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHPort
	I1108 00:34:25.576260   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHKeyPath
	I1108 00:34:25.576320   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:34:25.576503   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHUsername
	I1108 00:34:25.576680   56540 sshutil.go:53] new ssh client: &{IP:192.168.50.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/newest-cni-409933/id_rsa Username:docker}
	I1108 00:34:25.577256   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHPort
	I1108 00:34:25.577315   56540 main.go:141] libmachine: (newest-cni-409933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:8b:78", ip: ""} in network mk-newest-cni-409933: {Iface:virbr4 ExpiryTime:2023-11-08 01:33:48 +0000 UTC Type:0 Mac:52:54:00:46:8b:78 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:newest-cni-409933 Clientid:01:52:54:00:46:8b:78}
	I1108 00:34:25.577331   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined IP address 192.168.50.8 and MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:34:25.577508   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHKeyPath
	I1108 00:34:25.577748   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHUsername
	I1108 00:34:25.577854   56540 sshutil.go:53] new ssh client: &{IP:192.168.50.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/newest-cni-409933/id_rsa Username:docker}
	I1108 00:34:25.578773   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:34:25.579166   56540 main.go:141] libmachine: (newest-cni-409933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:8b:78", ip: ""} in network mk-newest-cni-409933: {Iface:virbr4 ExpiryTime:2023-11-08 01:33:48 +0000 UTC Type:0 Mac:52:54:00:46:8b:78 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:newest-cni-409933 Clientid:01:52:54:00:46:8b:78}
	I1108 00:34:25.579195   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined IP address 192.168.50.8 and MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:34:25.579401   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHPort
	I1108 00:34:25.579544   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHKeyPath
	I1108 00:34:25.579643   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHUsername
	I1108 00:34:25.579760   56540 sshutil.go:53] new ssh client: &{IP:192.168.50.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/newest-cni-409933/id_rsa Username:docker}
	I1108 00:34:25.598438   56540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38337
	I1108 00:34:25.598837   56540 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:34:25.599350   56540 main.go:141] libmachine: Using API Version  1
	I1108 00:34:25.599369   56540 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:34:25.599795   56540 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:34:25.599992   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetState
	I1108 00:34:25.601534   56540 main.go:141] libmachine: (newest-cni-409933) Calling .DriverName
	I1108 00:34:25.601799   56540 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 00:34:25.601812   56540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 00:34:25.601828   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHHostname
	I1108 00:34:25.604352   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:34:25.604790   56540 main.go:141] libmachine: (newest-cni-409933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:8b:78", ip: ""} in network mk-newest-cni-409933: {Iface:virbr4 ExpiryTime:2023-11-08 01:33:48 +0000 UTC Type:0 Mac:52:54:00:46:8b:78 Iaid: IPaddr:192.168.50.8 Prefix:24 Hostname:newest-cni-409933 Clientid:01:52:54:00:46:8b:78}
	I1108 00:34:25.604917   56540 main.go:141] libmachine: (newest-cni-409933) DBG | domain newest-cni-409933 has defined IP address 192.168.50.8 and MAC address 52:54:00:46:8b:78 in network mk-newest-cni-409933
	I1108 00:34:25.605178   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHPort
	I1108 00:34:25.605396   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHKeyPath
	I1108 00:34:25.605556   56540 main.go:141] libmachine: (newest-cni-409933) Calling .GetSSHUsername
	I1108 00:34:25.605687   56540 sshutil.go:53] new ssh client: &{IP:192.168.50.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/newest-cni-409933/id_rsa Username:docker}
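
Each sshutil line above opens an SSH client to the VM (docker@192.168.50.8:22 with the profile's id_rsa) so the addon manifests can be copied in and applied. A minimal sketch using golang.org/x/crypto/ssh, with the host-key check relaxed as is typical for a throwaway test VM:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/17585-9647/.minikube/machines/newest-cni-409933/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM; no known_hosts
        }
        client, err := ssh.Dial("tcp", "192.168.50.8:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        fmt.Println("connected")
    }
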
	I1108 00:34:25.737233   56540 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1108 00:34:25.737260   56540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1108 00:34:25.764032   56540 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 00:34:25.764059   56540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1108 00:34:25.764525   56540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 00:34:25.802476   56540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:34:25.814908   56540 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1108 00:34:25.814943   56540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1108 00:34:25.843816   56540 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 00:34:25.843856   56540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 00:34:25.851918   56540 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:34:25.851995   56540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:34:25.852174   56540 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1108 00:34:25.881310   56540 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1108 00:34:25.881336   56540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1108 00:34:25.912486   56540 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:34:25.912511   56540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 00:34:25.927545   56540 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1108 00:34:25.927575   56540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1108 00:34:25.954616   56540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:34:25.972727   56540 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1108 00:34:25.972747   56540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1108 00:34:26.036548   56540 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1108 00:34:26.036578   56540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1108 00:34:26.091468   56540 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1108 00:34:26.091490   56540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1108 00:34:26.115940   56540 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1108 00:34:26.115966   56540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1108 00:34:26.136201   56540 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 00:34:26.136224   56540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1108 00:34:26.157329   56540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 00:34:27.020094   56540 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.255536753s)
	I1108 00:34:27.020154   56540 main.go:141] libmachine: Making call to close driver server
	I1108 00:34:27.020171   56540 main.go:141] libmachine: (newest-cni-409933) Calling .Close
	I1108 00:34:27.020520   56540 main.go:141] libmachine: (newest-cni-409933) DBG | Closing plugin on server side
	I1108 00:34:27.020585   56540 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:34:27.020604   56540 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:34:27.020623   56540 main.go:141] libmachine: Making call to close driver server
	I1108 00:34:27.020637   56540 main.go:141] libmachine: (newest-cni-409933) Calling .Close
	I1108 00:34:27.020885   56540 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:34:27.020901   56540 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:34:27.028723   56540 main.go:141] libmachine: Making call to close driver server
	I1108 00:34:27.028744   56540 main.go:141] libmachine: (newest-cni-409933) Calling .Close
	I1108 00:34:27.029064   56540 main.go:141] libmachine: (newest-cni-409933) DBG | Closing plugin on server side
	I1108 00:34:27.029089   56540 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:34:27.029109   56540 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:34:27.628002   56540 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.825475473s)
	I1108 00:34:27.628058   56540 main.go:141] libmachine: Making call to close driver server
	I1108 00:34:27.628059   56540 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.776037155s)
	I1108 00:34:27.628074   56540 main.go:141] libmachine: (newest-cni-409933) Calling .Close
	I1108 00:34:27.628084   56540 api_server.go:72] duration metric: took 2.104411668s to wait for apiserver process to appear ...
	I1108 00:34:27.628092   56540 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:34:27.628107   56540 api_server.go:253] Checking apiserver healthz at https://192.168.50.8:8443/healthz ...
	I1108 00:34:27.628196   56540 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.6735489s)
	I1108 00:34:27.628237   56540 main.go:141] libmachine: Making call to close driver server
	I1108 00:34:27.628255   56540 main.go:141] libmachine: (newest-cni-409933) Calling .Close
	I1108 00:34:27.628366   56540 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:34:27.628370   56540 main.go:141] libmachine: (newest-cni-409933) DBG | Closing plugin on server side
	I1108 00:34:27.628382   56540 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:34:27.628392   56540 main.go:141] libmachine: Making call to close driver server
	I1108 00:34:27.628401   56540 main.go:141] libmachine: (newest-cni-409933) Calling .Close
	I1108 00:34:27.628541   56540 main.go:141] libmachine: (newest-cni-409933) DBG | Closing plugin on server side
	I1108 00:34:27.628559   56540 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:34:27.628572   56540 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:34:27.628581   56540 main.go:141] libmachine: Making call to close driver server
	I1108 00:34:27.628590   56540 main.go:141] libmachine: (newest-cni-409933) Calling .Close
	I1108 00:34:27.628638   56540 main.go:141] libmachine: (newest-cni-409933) DBG | Closing plugin on server side
	I1108 00:34:27.628670   56540 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:34:27.628680   56540 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:34:27.628981   56540 main.go:141] libmachine: (newest-cni-409933) DBG | Closing plugin on server side
	I1108 00:34:27.628983   56540 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:34:27.629005   56540 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:34:27.629015   56540 addons.go:467] Verifying addon metrics-server=true in "newest-cni-409933"
	I1108 00:34:27.636597   56540 api_server.go:279] https://192.168.50.8:8443/healthz returned 200:
	ok
	I1108 00:34:27.637691   56540 api_server.go:141] control plane version: v1.28.3
	I1108 00:34:27.637712   56540 api_server.go:131] duration metric: took 9.613261ms to wait for apiserver health ...
	I1108 00:34:27.637721   56540 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:34:27.645826   56540 system_pods.go:59] 8 kube-system pods found
	I1108 00:34:27.645853   56540 system_pods.go:61] "coredns-5dd5756b68-ll5wq" [e8399369-ce00-47cd-a19f-b0b557ec45c9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 00:34:27.645864   56540 system_pods.go:61] "etcd-newest-cni-409933" [6e5355b3-8329-4681-8597-7f25839b25f4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 00:34:27.645878   56540 system_pods.go:61] "kube-apiserver-newest-cni-409933" [dc4fc2d8-d008-4e2c-863a-ad811563dde1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 00:34:27.645893   56540 system_pods.go:61] "kube-controller-manager-newest-cni-409933" [e357943d-df33-461b-9019-cd74fa361f45] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 00:34:27.645906   56540 system_pods.go:61] "kube-proxy-pvbfz" [46f0280b-e95e-464b-8285-b019041acff0] Running
	I1108 00:34:27.645916   56540 system_pods.go:61] "kube-scheduler-newest-cni-409933" [735b37c0-751c-4c15-8263-c720464e1936] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 00:34:27.645926   56540 system_pods.go:61] "metrics-server-57f55c9bc5-xldkl" [3bf406e9-e592-427e-84dc-a08396038f76] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:34:27.645938   56540 system_pods.go:61] "storage-provisioner" [04ac5f4c-5252-4df8-a51c-9ef03d2fe9eb] Running
	I1108 00:34:27.645948   56540 system_pods.go:74] duration metric: took 8.220094ms to wait for pod list to return data ...
	I1108 00:34:27.645960   56540 default_sa.go:34] waiting for default service account to be created ...
	I1108 00:34:27.648793   56540 default_sa.go:45] found service account: "default"
	I1108 00:34:27.648810   56540 default_sa.go:55] duration metric: took 2.840446ms for default service account to be created ...
	I1108 00:34:27.648840   56540 kubeadm.go:581] duration metric: took 2.125168447s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1108 00:34:27.648862   56540 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:34:27.653439   56540 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:34:27.653468   56540 node_conditions.go:123] node cpu capacity is 2
	I1108 00:34:27.653480   56540 node_conditions.go:105] duration metric: took 4.60666ms to run NodePressure ...
	I1108 00:34:27.653493   56540 start.go:228] waiting for startup goroutines ...
	I1108 00:34:28.091106   56540 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.93370723s)
	I1108 00:34:28.091168   56540 main.go:141] libmachine: Making call to close driver server
	I1108 00:34:28.091184   56540 main.go:141] libmachine: (newest-cni-409933) Calling .Close
	I1108 00:34:28.091491   56540 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:34:28.091561   56540 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:34:28.091575   56540 main.go:141] libmachine: Making call to close driver server
	I1108 00:34:28.091585   56540 main.go:141] libmachine: (newest-cni-409933) Calling .Close
	I1108 00:34:28.091522   56540 main.go:141] libmachine: (newest-cni-409933) DBG | Closing plugin on server side
	I1108 00:34:28.091838   56540 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:34:28.091852   56540 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:34:28.093638   56540 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
	minikube -p newest-cni-409933 addons enable metrics-server
	
	
	I1108 00:34:28.095045   56540 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1108 00:34:28.096473   56540 addons.go:502] enable addons completed in 2.578414431s: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1108 00:34:28.096509   56540 start.go:233] waiting for cluster config update ...
	I1108 00:34:28.096522   56540 start.go:242] writing updated cluster config ...
	I1108 00:34:28.096794   56540 ssh_runner.go:195] Run: rm -f paused
	I1108 00:34:28.147749   56540 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1108 00:34:28.149357   56540 out.go:177] * Done! kubectl is now configured to use "newest-cni-409933" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-11-08 00:12:52 UTC, ends at Wed 2023-11-08 00:34:29 UTC. --
	Nov 08 00:34:29 no-preload-320390 crio[713]: time="2023-11-08 00:34:29.393031148Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699403669393014655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=27fa65ca-e57d-4785-8785-c7abf12147fa name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:34:29 no-preload-320390 crio[713]: time="2023-11-08 00:34:29.394406805Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d4cba0df-aa39-4003-938c-91c4f9a30471 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:34:29 no-preload-320390 crio[713]: time="2023-11-08 00:34:29.394460632Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d4cba0df-aa39-4003-938c-91c4f9a30471 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:34:29 no-preload-320390 crio[713]: time="2023-11-08 00:34:29.394626337Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:89294275812d549eab8ce0cdac2ded45c29910a232ba43955c5fc671f9456729,PodSandboxId:2a22830dc4b11ebe174d391e51d48e317426101abae8af821ca364240146aa86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1699402714755052561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdba396c-182a-4bef-8ccb-2275534d89c8,},Annotations:map[string]string{io.kubernetes.container.hash: ef424d44,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c34465a005584f8717eff45c810e58337f80bc5f87ede098533bcc716cc6b82a,PodSandboxId:8f9d54f627ac9cf4a6a158bd59974782c391c94abf0cbac4a88992ab90057fb8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1699402714532925842,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6k8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b019bf-527c-4265-a67c-31e6cf377039,},Annotations:map[string]string{io.kubernetes.container.hash: 2cbb9000,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ea18eeebb997e1c420490aaca5e3210cb999e8634e44fc18955bf19502a0ba,PodSandboxId:c8537f902f5b485b7f8dd3a7b90c5a4fda375f2774c608d7fe9fd206b97c01ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1699402713370700547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vl7nr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c6d5125-ebac-4931-9af7-045d1c4ba2b1,},Annotations:map[string]string{io.kubernetes.container.hash: e6be1849,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2b9790aba3f68a303cae1dfd0380a20f5abc6d0ca158a81cc13cf50ee09bb4a,PodSandboxId:bfd143469feb56623caea7b93a30b284d3103b7754676c9795e8aece29b963ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1699402690346299680,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8e9a6ea75c1f836169baf57b947fb963,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d47be6e9b0407212873db2905fdaae6db1089403681c1a53e30f2bc8f15aafb7,PodSandboxId:d42814db488e141657413d1b4ebe453ae8e872571e5ef6efff0f41641b0ae9d6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1699402689852347424,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d317eb0edde6b082ddeb87a0edd3fd,},Annotations:map
[string]string{io.kubernetes.container.hash: 941977ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b1c3ebbbf66c509c1ccf4591f3ebb7e8269c7d2aa74f294406eac958d98bc4b,PodSandboxId:67f3c4ee09cc7d810051e7aed7a9e2d08ce87c234c06f01ae8e86c204fdb2070,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1699402689518686511,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afb0f26b2571b2956b1d2260c
a7e78ae,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d181d8164e69f83813d7d59131e829c75cadfbd00f3e97edae5b82b47acddbe,PodSandboxId:b7eeef6985dd20728e93f2bffb2d5ee0d9bcc5bdf31acdf2b51f2dec48e4228e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1699402689555811674,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e8a3996624e70e2d7824097f608acdb,},A
nnotations:map[string]string{io.kubernetes.container.hash: 6d2b62dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d4cba0df-aa39-4003-938c-91c4f9a30471 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:34:29 no-preload-320390 crio[713]: time="2023-11-08 00:34:29.442207088Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=7ddb124e-165c-4e49-a105-b6e64f217b23 name=/runtime.v1.RuntimeService/Version
	Nov 08 00:34:29 no-preload-320390 crio[713]: time="2023-11-08 00:34:29.442373537Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=7ddb124e-165c-4e49-a105-b6e64f217b23 name=/runtime.v1.RuntimeService/Version
	Nov 08 00:34:29 no-preload-320390 crio[713]: time="2023-11-08 00:34:29.444009343Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8536503d-5755-49f3-b4be-64da193f41d3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:34:29 no-preload-320390 crio[713]: time="2023-11-08 00:34:29.444398898Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699403669444384836,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=8536503d-5755-49f3-b4be-64da193f41d3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:34:29 no-preload-320390 crio[713]: time="2023-11-08 00:34:29.445472847Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a931fe95-7a96-477d-91ff-d0d7844cd251 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:34:29 no-preload-320390 crio[713]: time="2023-11-08 00:34:29.445550215Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a931fe95-7a96-477d-91ff-d0d7844cd251 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:34:29 no-preload-320390 crio[713]: time="2023-11-08 00:34:29.445868522Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:89294275812d549eab8ce0cdac2ded45c29910a232ba43955c5fc671f9456729,PodSandboxId:2a22830dc4b11ebe174d391e51d48e317426101abae8af821ca364240146aa86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1699402714755052561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdba396c-182a-4bef-8ccb-2275534d89c8,},Annotations:map[string]string{io.kubernetes.container.hash: ef424d44,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c34465a005584f8717eff45c810e58337f80bc5f87ede098533bcc716cc6b82a,PodSandboxId:8f9d54f627ac9cf4a6a158bd59974782c391c94abf0cbac4a88992ab90057fb8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1699402714532925842,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6k8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b019bf-527c-4265-a67c-31e6cf377039,},Annotations:map[string]string{io.kubernetes.container.hash: 2cbb9000,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ea18eeebb997e1c420490aaca5e3210cb999e8634e44fc18955bf19502a0ba,PodSandboxId:c8537f902f5b485b7f8dd3a7b90c5a4fda375f2774c608d7fe9fd206b97c01ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1699402713370700547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vl7nr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c6d5125-ebac-4931-9af7-045d1c4ba2b1,},Annotations:map[string]string{io.kubernetes.container.hash: e6be1849,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2b9790aba3f68a303cae1dfd0380a20f5abc6d0ca158a81cc13cf50ee09bb4a,PodSandboxId:bfd143469feb56623caea7b93a30b284d3103b7754676c9795e8aece29b963ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1699402690346299680,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8e9a6ea75c1f836169baf57b947fb963,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d47be6e9b0407212873db2905fdaae6db1089403681c1a53e30f2bc8f15aafb7,PodSandboxId:d42814db488e141657413d1b4ebe453ae8e872571e5ef6efff0f41641b0ae9d6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1699402689852347424,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d317eb0edde6b082ddeb87a0edd3fd,},Annotations:map
[string]string{io.kubernetes.container.hash: 941977ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b1c3ebbbf66c509c1ccf4591f3ebb7e8269c7d2aa74f294406eac958d98bc4b,PodSandboxId:67f3c4ee09cc7d810051e7aed7a9e2d08ce87c234c06f01ae8e86c204fdb2070,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1699402689518686511,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afb0f26b2571b2956b1d2260c
a7e78ae,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d181d8164e69f83813d7d59131e829c75cadfbd00f3e97edae5b82b47acddbe,PodSandboxId:b7eeef6985dd20728e93f2bffb2d5ee0d9bcc5bdf31acdf2b51f2dec48e4228e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1699402689555811674,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e8a3996624e70e2d7824097f608acdb,},A
nnotations:map[string]string{io.kubernetes.container.hash: 6d2b62dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a931fe95-7a96-477d-91ff-d0d7844cd251 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:34:29 no-preload-320390 crio[713]: time="2023-11-08 00:34:29.491188808Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=bd3f12bf-ff69-495b-a530-28136004210c name=/runtime.v1.RuntimeService/Version
	Nov 08 00:34:29 no-preload-320390 crio[713]: time="2023-11-08 00:34:29.491276599Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=bd3f12bf-ff69-495b-a530-28136004210c name=/runtime.v1.RuntimeService/Version
	Nov 08 00:34:29 no-preload-320390 crio[713]: time="2023-11-08 00:34:29.492535177Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d626ee6f-dc08-4ad9-b3d7-336c4c1f07df name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:34:29 no-preload-320390 crio[713]: time="2023-11-08 00:34:29.492845573Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699403669492833360,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=d626ee6f-dc08-4ad9-b3d7-336c4c1f07df name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:34:29 no-preload-320390 crio[713]: time="2023-11-08 00:34:29.493713479Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f2a5a953-cbc3-4df8-ac71-7ab64b915198 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:34:29 no-preload-320390 crio[713]: time="2023-11-08 00:34:29.493759373Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f2a5a953-cbc3-4df8-ac71-7ab64b915198 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:34:29 no-preload-320390 crio[713]: time="2023-11-08 00:34:29.493940577Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:89294275812d549eab8ce0cdac2ded45c29910a232ba43955c5fc671f9456729,PodSandboxId:2a22830dc4b11ebe174d391e51d48e317426101abae8af821ca364240146aa86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1699402714755052561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdba396c-182a-4bef-8ccb-2275534d89c8,},Annotations:map[string]string{io.kubernetes.container.hash: ef424d44,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c34465a005584f8717eff45c810e58337f80bc5f87ede098533bcc716cc6b82a,PodSandboxId:8f9d54f627ac9cf4a6a158bd59974782c391c94abf0cbac4a88992ab90057fb8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1699402714532925842,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6k8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b019bf-527c-4265-a67c-31e6cf377039,},Annotations:map[string]string{io.kubernetes.container.hash: 2cbb9000,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ea18eeebb997e1c420490aaca5e3210cb999e8634e44fc18955bf19502a0ba,PodSandboxId:c8537f902f5b485b7f8dd3a7b90c5a4fda375f2774c608d7fe9fd206b97c01ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1699402713370700547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vl7nr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c6d5125-ebac-4931-9af7-045d1c4ba2b1,},Annotations:map[string]string{io.kubernetes.container.hash: e6be1849,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2b9790aba3f68a303cae1dfd0380a20f5abc6d0ca158a81cc13cf50ee09bb4a,PodSandboxId:bfd143469feb56623caea7b93a30b284d3103b7754676c9795e8aece29b963ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1699402690346299680,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8e9a6ea75c1f836169baf57b947fb963,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d47be6e9b0407212873db2905fdaae6db1089403681c1a53e30f2bc8f15aafb7,PodSandboxId:d42814db488e141657413d1b4ebe453ae8e872571e5ef6efff0f41641b0ae9d6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1699402689852347424,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d317eb0edde6b082ddeb87a0edd3fd,},Annotations:map
[string]string{io.kubernetes.container.hash: 941977ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b1c3ebbbf66c509c1ccf4591f3ebb7e8269c7d2aa74f294406eac958d98bc4b,PodSandboxId:67f3c4ee09cc7d810051e7aed7a9e2d08ce87c234c06f01ae8e86c204fdb2070,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1699402689518686511,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afb0f26b2571b2956b1d2260c
a7e78ae,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d181d8164e69f83813d7d59131e829c75cadfbd00f3e97edae5b82b47acddbe,PodSandboxId:b7eeef6985dd20728e93f2bffb2d5ee0d9bcc5bdf31acdf2b51f2dec48e4228e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1699402689555811674,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e8a3996624e70e2d7824097f608acdb,},A
nnotations:map[string]string{io.kubernetes.container.hash: 6d2b62dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f2a5a953-cbc3-4df8-ac71-7ab64b915198 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:34:29 no-preload-320390 crio[713]: time="2023-11-08 00:34:29.532644183Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=fdf905cc-3307-4e92-8e88-226acfa4a3fb name=/runtime.v1.RuntimeService/Version
	Nov 08 00:34:29 no-preload-320390 crio[713]: time="2023-11-08 00:34:29.532724255Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=fdf905cc-3307-4e92-8e88-226acfa4a3fb name=/runtime.v1.RuntimeService/Version
	Nov 08 00:34:29 no-preload-320390 crio[713]: time="2023-11-08 00:34:29.534239527Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=cea254fe-a6ba-4da1-91c9-d0654f6cfc24 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:34:29 no-preload-320390 crio[713]: time="2023-11-08 00:34:29.534621828Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699403669534608729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=cea254fe-a6ba-4da1-91c9-d0654f6cfc24 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:34:29 no-preload-320390 crio[713]: time="2023-11-08 00:34:29.535434933Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=56282dc8-ae78-4cb6-b5ca-63a1a251aa40 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:34:29 no-preload-320390 crio[713]: time="2023-11-08 00:34:29.535479289Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=56282dc8-ae78-4cb6-b5ca-63a1a251aa40 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:34:29 no-preload-320390 crio[713]: time="2023-11-08 00:34:29.535634677Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:89294275812d549eab8ce0cdac2ded45c29910a232ba43955c5fc671f9456729,PodSandboxId:2a22830dc4b11ebe174d391e51d48e317426101abae8af821ca364240146aa86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1699402714755052561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdba396c-182a-4bef-8ccb-2275534d89c8,},Annotations:map[string]string{io.kubernetes.container.hash: ef424d44,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c34465a005584f8717eff45c810e58337f80bc5f87ede098533bcc716cc6b82a,PodSandboxId:8f9d54f627ac9cf4a6a158bd59974782c391c94abf0cbac4a88992ab90057fb8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:c27b501aff0bdcf8e01a6878c04bb3c561393d541d59bbcf78899e526f75865c,State:CONTAINER_RUNNING,CreatedAt:1699402714532925842,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6k8g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b019bf-527c-4265-a67c-31e6cf377039,},Annotations:map[string]string{io.kubernetes.container.hash: 2cbb9000,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ea18eeebb997e1c420490aaca5e3210cb999e8634e44fc18955bf19502a0ba,PodSandboxId:c8537f902f5b485b7f8dd3a7b90c5a4fda375f2774c608d7fe9fd206b97c01ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1699402713370700547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vl7nr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c6d5125-ebac-4931-9af7-045d1c4ba2b1,},Annotations:map[string]string{io.kubernetes.container.hash: e6be1849,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2b9790aba3f68a303cae1dfd0380a20f5abc6d0ca158a81cc13cf50ee09bb4a,PodSandboxId:bfd143469feb56623caea7b93a30b284d3103b7754676c9795e8aece29b963ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:0a0f87945559d9b6b3f2fa902622af79f71a98a35be9eb324615e61e0cd71125,State:CONTAINER_RUNNING,CreatedAt:1699402690346299680,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8e9a6ea75c1f836169baf57b947fb963,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d47be6e9b0407212873db2905fdaae6db1089403681c1a53e30f2bc8f15aafb7,PodSandboxId:d42814db488e141657413d1b4ebe453ae8e872571e5ef6efff0f41641b0ae9d6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1699402689852347424,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d317eb0edde6b082ddeb87a0edd3fd,},Annotations:map
[string]string{io.kubernetes.container.hash: 941977ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b1c3ebbbf66c509c1ccf4591f3ebb7e8269c7d2aa74f294406eac958d98bc4b,PodSandboxId:67f3c4ee09cc7d810051e7aed7a9e2d08ce87c234c06f01ae8e86c204fdb2070,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:38c5f1209823bc435d4ab1bc25e1a1eacbb8ae9eb7266b1f1137d5b22b847e53,State:CONTAINER_RUNNING,CreatedAt:1699402689518686511,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afb0f26b2571b2956b1d2260c
a7e78ae,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d181d8164e69f83813d7d59131e829c75cadfbd00f3e97edae5b82b47acddbe,PodSandboxId:b7eeef6985dd20728e93f2bffb2d5ee0d9bcc5bdf31acdf2b51f2dec48e4228e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1f230854322f1f6224d356f4d42417f2ef0c863ffe7afa0cc0c1eb2ed9a4d3c8,State:CONTAINER_RUNNING,CreatedAt:1699402689555811674,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-320390,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e8a3996624e70e2d7824097f608acdb,},A
nnotations:map[string]string{io.kubernetes.container.hash: 6d2b62dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=56282dc8-ae78-4cb6-b5ca-63a1a251aa40 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	89294275812d5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   2a22830dc4b11       storage-provisioner
	c34465a005584       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   15 minutes ago      Running             kube-proxy                0                   8f9d54f627ac9       kube-proxy-m6k8g
	52ea18eeebb99       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   c8537f902f5b4       coredns-5dd5756b68-vl7nr
	a2b9790aba3f6       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   16 minutes ago      Running             kube-scheduler            2                   bfd143469feb5       kube-scheduler-no-preload-320390
	d47be6e9b0407       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   16 minutes ago      Running             etcd                      2                   d42814db488e1       etcd-no-preload-320390
	7d181d8164e69       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   16 minutes ago      Running             kube-apiserver            2                   b7eeef6985dd2       kube-apiserver-no-preload-320390
	3b1c3ebbbf66c       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   16 minutes ago      Running             kube-controller-manager   2                   67f3c4ee09cc7       kube-controller-manager-no-preload-320390
	
	* 
	* ==> coredns [52ea18eeebb997e1c420490aaca5e3210cb999e8634e44fc18955bf19502a0ba] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:37451 - 60939 "HINFO IN 6423122248177977238.1283848085502843503. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020976607s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-320390
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-320390
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e
	                    minikube.k8s.io/name=no-preload-320390
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_08T00_18_17_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Nov 2023 00:18:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-320390
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Nov 2023 00:34:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Nov 2023 00:33:59 +0000   Wed, 08 Nov 2023 00:18:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Nov 2023 00:33:59 +0000   Wed, 08 Nov 2023 00:18:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Nov 2023 00:33:59 +0000   Wed, 08 Nov 2023 00:18:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Nov 2023 00:33:59 +0000   Wed, 08 Nov 2023 00:18:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.176
	  Hostname:    no-preload-320390
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 8c178cf512c54d4fb9fcd7bd180751f4
	  System UUID:                8c178cf5-12c5-4d4f-b9fc-d7bd180751f4
	  Boot ID:                    8f17c187-089a-41df-a272-f9c7d1be0d14
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-vl7nr                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-no-preload-320390                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-no-preload-320390             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-no-preload-320390    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-m6k8g                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-no-preload-320390             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-57f55c9bc5-n49bz              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node no-preload-320390 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node no-preload-320390 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node no-preload-320390 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node no-preload-320390 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node no-preload-320390 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node no-preload-320390 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             16m                kubelet          Node no-preload-320390 status is now: NodeNotReady
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                16m                kubelet          Node no-preload-320390 status is now: NodeReady
	  Normal  RegisteredNode           16m                node-controller  Node no-preload-320390 event: Registered Node no-preload-320390 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov 8 00:12] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068367] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.325854] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.497505] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.141095] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.439782] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.474729] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.110954] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.147908] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.096650] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.238061] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[Nov 8 00:13] systemd-fstab-generator[1270]: Ignoring "noauto" for root device
	[ +19.445945] kauditd_printk_skb: 29 callbacks suppressed
	[Nov 8 00:18] systemd-fstab-generator[3913]: Ignoring "noauto" for root device
	[  +9.797379] systemd-fstab-generator[4238]: Ignoring "noauto" for root device
	[ +13.468382] kauditd_printk_skb: 2 callbacks suppressed
	[ +13.153644] kauditd_printk_skb: 9 callbacks suppressed
	
	* 
	* ==> etcd [d47be6e9b0407212873db2905fdaae6db1089403681c1a53e30f2bc8f15aafb7] <==
	* {"level":"info","ts":"2023-11-08T00:18:11.573238Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T00:18:11.577399Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"4f4f572eb29375a","local-member-attributes":"{Name:no-preload-320390 ClientURLs:[https://192.168.61.176:2379]}","request-path":"/0/members/4f4f572eb29375a/attributes","cluster-id":"310df9cc729b3e75","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-08T00:18:11.577468Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T00:18:11.584762Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-08T00:18:11.593203Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"310df9cc729b3e75","local-member-id":"4f4f572eb29375a","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T00:18:11.593382Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T00:18:11.593424Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T00:18:11.59344Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T00:18:11.600564Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.176:2379"}
	{"level":"info","ts":"2023-11-08T00:18:11.603304Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-08T00:18:11.603454Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-08T00:28:12.091715Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":721}
	{"level":"info","ts":"2023-11-08T00:28:12.094798Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":721,"took":"2.688447ms","hash":640910926}
	{"level":"info","ts":"2023-11-08T00:28:12.094905Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":640910926,"revision":721,"compact-revision":-1}
	{"level":"info","ts":"2023-11-08T00:32:56.397405Z","caller":"traceutil/trace.go:171","msg":"trace[1559989035] linearizableReadLoop","detail":"{readStateIndex:1387; appliedIndex:1386; }","duration":"205.97047ms","start":"2023-11-08T00:32:56.191395Z","end":"2023-11-08T00:32:56.397365Z","steps":["trace[1559989035] 'read index received'  (duration: 205.696366ms)","trace[1559989035] 'applied index is now lower than readState.Index'  (duration: 273.565µs)"],"step_count":2}
	{"level":"warn","ts":"2023-11-08T00:32:56.39775Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.260982ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.61.176\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2023-11-08T00:32:56.397814Z","caller":"traceutil/trace.go:171","msg":"trace[1798023601] range","detail":"{range_begin:/registry/masterleases/192.168.61.176; range_end:; response_count:1; response_revision:1195; }","duration":"206.453643ms","start":"2023-11-08T00:32:56.191347Z","end":"2023-11-08T00:32:56.3978Z","steps":["trace[1798023601] 'agreement among raft nodes before linearized reading'  (duration: 206.212433ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-08T00:32:56.398024Z","caller":"traceutil/trace.go:171","msg":"trace[1574480388] transaction","detail":"{read_only:false; response_revision:1195; number_of_response:1; }","duration":"234.890403ms","start":"2023-11-08T00:32:56.163107Z","end":"2023-11-08T00:32:56.397998Z","steps":["trace[1574480388] 'process raft request'  (duration: 234.131516ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-08T00:32:56.602264Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.115013ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3988653992102279143 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:375a8bac4b3337e6>","response":"size:39"}
	{"level":"info","ts":"2023-11-08T00:32:56.602368Z","caller":"traceutil/trace.go:171","msg":"trace[1489502748] linearizableReadLoop","detail":"{readStateIndex:1388; appliedIndex:1387; }","duration":"131.134501ms","start":"2023-11-08T00:32:56.471222Z","end":"2023-11-08T00:32:56.602356Z","steps":["trace[1489502748] 'read index received'  (duration: 29.626µs)","trace[1489502748] 'applied index is now lower than readState.Index'  (duration: 131.103354ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-08T00:32:56.602451Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.322753ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-08T00:32:56.602472Z","caller":"traceutil/trace.go:171","msg":"trace[1402311158] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1195; }","duration":"131.355244ms","start":"2023-11-08T00:32:56.471108Z","end":"2023-11-08T00:32:56.602463Z","steps":["trace[1402311158] 'agreement among raft nodes before linearized reading'  (duration: 131.289717ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-08T00:33:12.100226Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":964}
	{"level":"info","ts":"2023-11-08T00:33:12.10205Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":964,"took":"1.390592ms","hash":4200915535}
	{"level":"info","ts":"2023-11-08T00:33:12.102197Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4200915535,"revision":964,"compact-revision":721}
	
	* 
	* ==> kernel <==
	*  00:34:29 up 21 min,  0 users,  load average: 0.26, 0.22, 0.26
	Linux no-preload-320390 5.10.57 #1 SMP Tue Nov 7 06:51:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [7d181d8164e69f83813d7d59131e829c75cadfbd00f3e97edae5b82b47acddbe] <==
	* W1108 00:31:15.377626       1 handler_proxy.go:93] no RequestInfo found in the context
	E1108 00:31:15.377831       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1108 00:31:15.377872       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1108 00:32:14.223314       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1108 00:33:14.222620       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1108 00:33:14.379747       1 handler_proxy.go:93] no RequestInfo found in the context
	E1108 00:33:14.379945       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1108 00:33:14.380480       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1108 00:33:15.380423       1 handler_proxy.go:93] no RequestInfo found in the context
	E1108 00:33:15.380662       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1108 00:33:15.380787       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1108 00:33:15.380563       1 handler_proxy.go:93] no RequestInfo found in the context
	E1108 00:33:15.380921       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1108 00:33:15.382850       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1108 00:34:14.223231       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1108 00:34:15.381727       1 handler_proxy.go:93] no RequestInfo found in the context
	E1108 00:34:15.381796       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1108 00:34:15.381804       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1108 00:34:15.384115       1 handler_proxy.go:93] no RequestInfo found in the context
	E1108 00:34:15.384327       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1108 00:34:15.384391       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [3b1c3ebbbf66c509c1ccf4591f3ebb7e8269c7d2aa74f294406eac958d98bc4b] <==
	* E1108 00:29:00.090034       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:29:00.607415       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1108 00:29:27.868918       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="358.434µs"
	E1108 00:29:30.097919       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:29:30.616916       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1108 00:29:38.862903       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="157.456µs"
	E1108 00:30:00.104347       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:30:00.627296       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:30:30.111068       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:30:30.643338       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:31:00.117703       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:31:00.652499       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:31:30.124719       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:31:30.661775       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:32:00.131304       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:32:00.672228       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:32:30.138986       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:32:30.683095       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:33:00.146519       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:33:00.695023       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:33:30.152802       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:33:30.705367       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:34:00.162215       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:34:00.717068       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1108 00:34:27.883360       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="522.36µs"
	
	* 
	* ==> kube-proxy [c34465a005584f8717eff45c810e58337f80bc5f87ede098533bcc716cc6b82a] <==
	* I1108 00:18:34.883939       1 server_others.go:69] "Using iptables proxy"
	I1108 00:18:34.926037       1 node.go:141] Successfully retrieved node IP: 192.168.61.176
	I1108 00:18:35.028087       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1108 00:18:35.028239       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1108 00:18:35.082455       1 server_others.go:152] "Using iptables Proxier"
	I1108 00:18:35.082582       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1108 00:18:35.083540       1 server.go:846] "Version info" version="v1.28.3"
	I1108 00:18:35.083558       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 00:18:35.091711       1 config.go:188] "Starting service config controller"
	I1108 00:18:35.091975       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1108 00:18:35.093034       1 config.go:315] "Starting node config controller"
	I1108 00:18:35.096778       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1108 00:18:35.095676       1 config.go:97] "Starting endpoint slice config controller"
	I1108 00:18:35.096863       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1108 00:18:35.196909       1 shared_informer.go:318] Caches are synced for node config
	I1108 00:18:35.197005       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1108 00:18:35.197004       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [a2b9790aba3f68a303cae1dfd0380a20f5abc6d0ca158a81cc13cf50ee09bb4a] <==
	* W1108 00:18:14.400492       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1108 00:18:14.400557       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1108 00:18:15.226880       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1108 00:18:15.227001       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1108 00:18:15.268087       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1108 00:18:15.268290       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1108 00:18:15.306464       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1108 00:18:15.306552       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1108 00:18:15.346269       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1108 00:18:15.346346       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1108 00:18:15.483476       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1108 00:18:15.483541       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 00:18:15.515945       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1108 00:18:15.516023       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1108 00:18:15.560036       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1108 00:18:15.560088       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1108 00:18:15.580428       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1108 00:18:15.580547       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1108 00:18:15.603384       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1108 00:18:15.603463       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1108 00:18:15.618618       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1108 00:18:15.618708       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1108 00:18:15.645761       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1108 00:18:15.645831       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1108 00:18:18.177253       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-11-08 00:12:52 UTC, ends at Wed 2023-11-08 00:34:30 UTC. --
	Nov 08 00:32:17 no-preload-320390 kubelet[4245]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 08 00:32:17 no-preload-320390 kubelet[4245]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 08 00:32:20 no-preload-320390 kubelet[4245]: E1108 00:32:20.846759    4245 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n49bz" podUID="26c5310d-c29f-476a-a520-bd693143e248"
	Nov 08 00:32:32 no-preload-320390 kubelet[4245]: E1108 00:32:32.847378    4245 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n49bz" podUID="26c5310d-c29f-476a-a520-bd693143e248"
	Nov 08 00:32:44 no-preload-320390 kubelet[4245]: E1108 00:32:44.846981    4245 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n49bz" podUID="26c5310d-c29f-476a-a520-bd693143e248"
	Nov 08 00:32:58 no-preload-320390 kubelet[4245]: E1108 00:32:58.847402    4245 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n49bz" podUID="26c5310d-c29f-476a-a520-bd693143e248"
	Nov 08 00:33:12 no-preload-320390 kubelet[4245]: E1108 00:33:12.846837    4245 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n49bz" podUID="26c5310d-c29f-476a-a520-bd693143e248"
	Nov 08 00:33:17 no-preload-320390 kubelet[4245]: E1108 00:33:17.971525    4245 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 08 00:33:17 no-preload-320390 kubelet[4245]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 08 00:33:17 no-preload-320390 kubelet[4245]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 08 00:33:17 no-preload-320390 kubelet[4245]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 08 00:33:18 no-preload-320390 kubelet[4245]: E1108 00:33:18.086496    4245 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Nov 08 00:33:23 no-preload-320390 kubelet[4245]: E1108 00:33:23.847894    4245 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n49bz" podUID="26c5310d-c29f-476a-a520-bd693143e248"
	Nov 08 00:33:37 no-preload-320390 kubelet[4245]: E1108 00:33:37.848577    4245 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n49bz" podUID="26c5310d-c29f-476a-a520-bd693143e248"
	Nov 08 00:33:52 no-preload-320390 kubelet[4245]: E1108 00:33:52.846260    4245 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n49bz" podUID="26c5310d-c29f-476a-a520-bd693143e248"
	Nov 08 00:34:03 no-preload-320390 kubelet[4245]: E1108 00:34:03.848636    4245 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n49bz" podUID="26c5310d-c29f-476a-a520-bd693143e248"
	Nov 08 00:34:14 no-preload-320390 kubelet[4245]: E1108 00:34:14.870901    4245 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Nov 08 00:34:14 no-preload-320390 kubelet[4245]: E1108 00:34:14.870960    4245 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Nov 08 00:34:14 no-preload-320390 kubelet[4245]: E1108 00:34:14.871272    4245 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-g7bl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-n49bz_kube-system(26c5310d-c29f-476a-a520-bd693143e248): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 08 00:34:14 no-preload-320390 kubelet[4245]: E1108 00:34:14.871320    4245 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-n49bz" podUID="26c5310d-c29f-476a-a520-bd693143e248"
	Nov 08 00:34:17 no-preload-320390 kubelet[4245]: E1108 00:34:17.971906    4245 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 08 00:34:17 no-preload-320390 kubelet[4245]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 08 00:34:17 no-preload-320390 kubelet[4245]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 08 00:34:17 no-preload-320390 kubelet[4245]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 08 00:34:27 no-preload-320390 kubelet[4245]: E1108 00:34:27.853998    4245 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n49bz" podUID="26c5310d-c29f-476a-a520-bd693143e248"
	
	* 
	* ==> storage-provisioner [89294275812d549eab8ce0cdac2ded45c29910a232ba43955c5fc671f9456729] <==
	* I1108 00:18:34.925963       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 00:18:34.942945       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 00:18:34.943205       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1108 00:18:34.960405       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 00:18:34.960844       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-320390_84f16619-62b7-4b7d-8ec7-b67f9c365c96!
	I1108 00:18:34.964859       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"46c1dbd2-a970-4526-bfb8-47404fe8eb3a", APIVersion:"v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-320390_84f16619-62b7-4b7d-8ec7-b67f9c365c96 became leader
	I1108 00:18:35.062941       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-320390_84f16619-62b7-4b7d-8ec7-b67f9c365c96!
	

-- /stdout --
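
The etcd traces at the top of this dump ("apply request took too long", "linearizableReadLoop") record reads such as the /registry/pods/kubernetes-dashboard/ range waiting on raft agreement before returning. As a hedged illustration of the two read modes those traces distinguish, here is a minimal Go sketch using the etcd v3 client; the 127.0.0.1:2379 endpoint is an assumption for the example and is not taken from this test run.

package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Endpoint is assumed for illustration; not part of this report.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// A default Get is linearizable: it takes the read-index path that the
	// "linearizableReadLoop" trace above is timing (read index received,
	// then wait until the applied index catches up).
	if _, err := cli.Get(ctx, "/registry/pods/kubernetes-dashboard/", clientv3.WithPrefix()); err != nil {
		fmt.Println("linearizable read failed:", err)
	}

	// A serializable read answers from the local member's current state and
	// skips the raft round trip, trading freshness for latency.
	if _, err := cli.Get(ctx, "/registry/pods/kubernetes-dashboard/",
		clientv3.WithPrefix(), clientv3.WithSerializable()); err != nil {
		fmt.Println("serializable read failed:", err)
	}
}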
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-320390 -n no-preload-320390
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-320390 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-n49bz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-320390 describe pod metrics-server-57f55c9bc5-n49bz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-320390 describe pod metrics-server-57f55c9bc5-n49bz: exit status 1 (72.083149ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-n49bz" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-320390 describe pod metrics-server-57f55c9bc5-n49bz: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (400.26s)
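
Note the sequencing in the post-mortem above: the field-selector query still saw metrics-server-57f55c9bc5-n49bz as non-running, but the pod was gone by the time the describe ran, hence the NotFound. A minimal client-go sketch of that same "non-running pods" query (kubectl get po -A --field-selector=status.phase!=Running), with a hypothetical kubeconfig path used only for illustration:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is hypothetical; substitute the profile's config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same field selector the helper passes to kubectl.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// While it still exists, the metrics-server pod above would be
		// listed here as Pending (ImagePullBackOff on fake.domain).
		fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}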

x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1108 00:28:53.871508   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-039263 -n default-k8s-diff-port-039263
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-11-08 00:37:09.204441366 +0000 UTC m=+5763.367750251
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-039263 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-039263 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (59.830548ms)

** stderr ** 
	Error from server (NotFound): namespaces "kubernetes-dashboard" not found

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-039263 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
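
The assertion chain above reduces to one fact: after the stop/start cycle the kubernetes-dashboard namespace was never recreated, so the 9m0s poll for pods labelled k8s-app=kubernetes-dashboard timed out and every follow-up lookup returned NotFound. A minimal sketch of that wait with client-go, again assuming a hypothetical kubeconfig path:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is hypothetical; substitute the profile's config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 5s, up to the test's 9m deadline, for a Running pod that
	// matches the label the test waits on.
	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx,
				metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
			if err != nil {
				return false, nil // namespace may not exist yet; keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		fmt.Println("dashboard pod never became Running:", err) // the failure seen here
	}
}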
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-039263 -n default-k8s-diff-port-039263
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-039263 logs -n 25
E1108 00:37:10.104130   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-039263 logs -n 25: (1.356023532s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-010870 sudo                               | kindnet-010870            | jenkins | v1.32.0 | 08 Nov 23 00:36 UTC | 08 Nov 23 00:36 UTC |
	|         | systemctl status kubelet --all                       |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010870 sudo                               | kindnet-010870            | jenkins | v1.32.0 | 08 Nov 23 00:36 UTC | 08 Nov 23 00:36 UTC |
	|         | systemctl cat kubelet                                |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010870 sudo                               | kindnet-010870            | jenkins | v1.32.0 | 08 Nov 23 00:36 UTC | 08 Nov 23 00:36 UTC |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010870 sudo cat                           | kindnet-010870            | jenkins | v1.32.0 | 08 Nov 23 00:36 UTC | 08 Nov 23 00:36 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010870 sudo cat                           | kindnet-010870            | jenkins | v1.32.0 | 08 Nov 23 00:36 UTC | 08 Nov 23 00:36 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010870 sudo                               | kindnet-010870            | jenkins | v1.32.0 | 08 Nov 23 00:36 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010870 sudo                               | kindnet-010870            | jenkins | v1.32.0 | 08 Nov 23 00:36 UTC | 08 Nov 23 00:36 UTC |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010870 sudo cat                           | kindnet-010870            | jenkins | v1.32.0 | 08 Nov 23 00:36 UTC | 08 Nov 23 00:36 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010870 sudo docker                        | kindnet-010870            | jenkins | v1.32.0 | 08 Nov 23 00:36 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010870 sudo                               | kindnet-010870            | jenkins | v1.32.0 | 08 Nov 23 00:36 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010870 sudo                               | kindnet-010870            | jenkins | v1.32.0 | 08 Nov 23 00:36 UTC | 08 Nov 23 00:36 UTC |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010870 sudo cat                           | kindnet-010870            | jenkins | v1.32.0 | 08 Nov 23 00:36 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010870 sudo cat                           | kindnet-010870            | jenkins | v1.32.0 | 08 Nov 23 00:36 UTC | 08 Nov 23 00:36 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010870 sudo                               | kindnet-010870            | jenkins | v1.32.0 | 08 Nov 23 00:36 UTC | 08 Nov 23 00:36 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010870 sudo                               | kindnet-010870            | jenkins | v1.32.0 | 08 Nov 23 00:36 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010870 sudo                               | kindnet-010870            | jenkins | v1.32.0 | 08 Nov 23 00:36 UTC | 08 Nov 23 00:36 UTC |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010870 sudo cat                           | kindnet-010870            | jenkins | v1.32.0 | 08 Nov 23 00:36 UTC | 08 Nov 23 00:36 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010870 sudo cat                           | kindnet-010870            | jenkins | v1.32.0 | 08 Nov 23 00:36 UTC | 08 Nov 23 00:36 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010870 sudo                               | kindnet-010870            | jenkins | v1.32.0 | 08 Nov 23 00:36 UTC | 08 Nov 23 00:36 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010870 sudo                               | kindnet-010870            | jenkins | v1.32.0 | 08 Nov 23 00:36 UTC | 08 Nov 23 00:36 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010870 sudo                               | kindnet-010870            | jenkins | v1.32.0 | 08 Nov 23 00:36 UTC | 08 Nov 23 00:36 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010870 sudo find                          | kindnet-010870            | jenkins | v1.32.0 | 08 Nov 23 00:36 UTC | 08 Nov 23 00:36 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p kindnet-010870 sudo crio                          | kindnet-010870            | jenkins | v1.32.0 | 08 Nov 23 00:36 UTC | 08 Nov 23 00:36 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p kindnet-010870                                    | kindnet-010870            | jenkins | v1.32.0 | 08 Nov 23 00:36 UTC | 08 Nov 23 00:36 UTC |
	| start   | -p enable-default-cni-010870                         | enable-default-cni-010870 | jenkins | v1.32.0 | 08 Nov 23 00:36 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/08 00:36:51
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 00:36:51.683355   61493 out.go:296] Setting OutFile to fd 1 ...
	I1108 00:36:51.683501   61493 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:36:51.683509   61493 out.go:309] Setting ErrFile to fd 2...
	I1108 00:36:51.683514   61493 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:36:51.683784   61493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
	I1108 00:36:51.684404   61493 out.go:303] Setting JSON to false
	I1108 00:36:51.685596   61493 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8361,"bootTime":1699395451,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 00:36:51.685660   61493 start.go:138] virtualization: kvm guest
	I1108 00:36:51.687687   61493 out.go:177] * [enable-default-cni-010870] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1108 00:36:51.689555   61493 out.go:177]   - MINIKUBE_LOCATION=17585
	I1108 00:36:51.689577   61493 notify.go:220] Checking for updates...
	I1108 00:36:51.691214   61493 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 00:36:51.693084   61493 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:36:51.694993   61493 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	I1108 00:36:51.696462   61493 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 00:36:51.697956   61493 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 00:36:51.700190   61493 config.go:182] Loaded profile config "calico-010870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:36:51.700347   61493 config.go:182] Loaded profile config "custom-flannel-010870": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:36:51.700512   61493 config.go:182] Loaded profile config "default-k8s-diff-port-039263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:36:51.700620   61493 driver.go:378] Setting default libvirt URI to qemu:///system
	I1108 00:36:51.750462   61493 out.go:177] * Using the kvm2 driver based on user configuration
	I1108 00:36:51.751915   61493 start.go:298] selected driver: kvm2
	I1108 00:36:51.751944   61493 start.go:902] validating driver "kvm2" against <nil>
	I1108 00:36:51.751957   61493 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 00:36:51.753040   61493 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:36:51.753144   61493 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17585-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1108 00:36:51.768755   61493 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1108 00:36:51.768805   61493 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	E1108 00:36:51.769005   61493 start_flags.go:465] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1108 00:36:51.769030   61493 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 00:36:51.769082   61493 cni.go:84] Creating CNI manager for "bridge"
	I1108 00:36:51.769095   61493 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1108 00:36:51.769104   61493 start_flags.go:323] config:
	{Name:enable-default-cni-010870 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:enable-default-cni-010870 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:36:51.769222   61493 iso.go:125] acquiring lock: {Name:mk02d02b2a7a45dbdd1b46a32fb0724673cb4d8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:36:51.771057   61493 out.go:177] * Starting control plane node enable-default-cni-010870 in cluster enable-default-cni-010870
	I1108 00:36:51.772569   61493 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1108 00:36:51.772598   61493 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1108 00:36:51.772604   61493 cache.go:56] Caching tarball of preloaded images
	I1108 00:36:51.772675   61493 preload.go:174] Found /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 00:36:51.772684   61493 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1108 00:36:51.772770   61493 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/enable-default-cni-010870/config.json ...
	I1108 00:36:51.772790   61493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/enable-default-cni-010870/config.json: {Name:mk050ffe15454cccafc705c5021782707eae884d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:36:51.772930   61493 start.go:365] acquiring machines lock for enable-default-cni-010870: {Name:mkf032f30be570950285b6e092e75fb29cc3d166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1108 00:36:51.772965   61493 start.go:369] acquired machines lock for "enable-default-cni-010870" in 15.408µs
	I1108 00:36:51.772980   61493 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-010870 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:en
able-default-cni-010870 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 00:36:51.773032   61493 start.go:125] createHost starting for "" (driver="kvm2")
	I1108 00:36:51.827488   58234 node_ready.go:58] node "calico-010870" has status "Ready":"False"
	I1108 00:36:54.327122   58234 node_ready.go:58] node "calico-010870" has status "Ready":"False"
	I1108 00:36:51.302448   59816 out.go:204]   - Booting up control plane ...
	I1108 00:36:51.302600   59816 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 00:36:51.302702   59816 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 00:36:51.302788   59816 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 00:36:51.326239   59816 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 00:36:51.326526   59816 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 00:36:51.326744   59816 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1108 00:36:51.474165   59816 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1108 00:36:51.774765   61493 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1108 00:36:51.774915   61493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:36:51.774955   61493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:36:51.794171   61493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40581
	I1108 00:36:51.794584   61493 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:36:51.795254   61493 main.go:141] libmachine: Using API Version  1
	I1108 00:36:51.795305   61493 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:36:51.795668   61493 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:36:51.795870   61493 main.go:141] libmachine: (enable-default-cni-010870) Calling .GetMachineName
	I1108 00:36:51.796068   61493 main.go:141] libmachine: (enable-default-cni-010870) Calling .DriverName
	I1108 00:36:51.796233   61493 start.go:159] libmachine.API.Create for "enable-default-cni-010870" (driver="kvm2")
	I1108 00:36:51.796271   61493 client.go:168] LocalClient.Create starting
	I1108 00:36:51.796313   61493 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem
	I1108 00:36:51.796354   61493 main.go:141] libmachine: Decoding PEM data...
	I1108 00:36:51.796378   61493 main.go:141] libmachine: Parsing certificate...
	I1108 00:36:51.796445   61493 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem
	I1108 00:36:51.796469   61493 main.go:141] libmachine: Decoding PEM data...
	I1108 00:36:51.796486   61493 main.go:141] libmachine: Parsing certificate...
	I1108 00:36:51.796510   61493 main.go:141] libmachine: Running pre-create checks...
	I1108 00:36:51.796527   61493 main.go:141] libmachine: (enable-default-cni-010870) Calling .PreCreateCheck
	I1108 00:36:51.796980   61493 main.go:141] libmachine: (enable-default-cni-010870) Calling .GetConfigRaw
	I1108 00:36:51.797391   61493 main.go:141] libmachine: Creating machine...
	I1108 00:36:51.797410   61493 main.go:141] libmachine: (enable-default-cni-010870) Calling .Create
	I1108 00:36:51.797551   61493 main.go:141] libmachine: (enable-default-cni-010870) Creating KVM machine...
	I1108 00:36:51.798926   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | found existing default KVM network
	I1108 00:36:51.800311   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | I1108 00:36:51.800137   61516 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:3c:c7:48} reservation:<nil>}
	I1108 00:36:51.801327   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | I1108 00:36:51.801251   61516 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:0c:54:e8} reservation:<nil>}
	I1108 00:36:51.802550   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | I1108 00:36:51.802440   61516 network.go:209] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00027f2b0}
	I1108 00:36:51.807558   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | trying to create private KVM network mk-enable-default-cni-010870 192.168.61.0/24...
	I1108 00:36:51.900751   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | private KVM network mk-enable-default-cni-010870 192.168.61.0/24 created
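
The network.go lines above show libmachine probing private /24 ranges and skipping any subnet already owned by a host interface (virbr1, virbr3) before claiming 192.168.61.0/24. A minimal Go sketch of that free-subnet scan, assuming a fixed candidate list; the helper name and list are illustrative, not minikube's actual code:

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet returns the first candidate /24 that no local
// interface address falls inside, the same test the network.go lines
// above run against virbr1 and virbr3. The candidate list here is
// illustrative; minikube walks a larger ordered set of private ranges.
func firstFreeSubnet(candidates []string) (*net.IPNet, error) {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return nil, err
	}
	for _, c := range candidates {
		_, subnet, err := net.ParseCIDR(c)
		if err != nil {
			return nil, err
		}
		taken := false
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && subnet.Contains(ipnet.IP) {
				taken = true // e.g. virbr1 already holds 192.168.39.1
				break
			}
		}
		if !taken {
			return subnet, nil
		}
	}
	return nil, fmt.Errorf("no free subnet among %v", candidates)
}

func main() {
	subnet, err := firstFreeSubnet([]string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"})
	fmt.Println(subnet, err)
}
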
	I1108 00:36:51.900785   61493 main.go:141] libmachine: (enable-default-cni-010870) Setting up store path in /home/jenkins/minikube-integration/17585-9647/.minikube/machines/enable-default-cni-010870 ...
	I1108 00:36:51.900802   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | I1108 00:36:51.900721   61516 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17585-9647/.minikube
	I1108 00:36:51.900842   61493 main.go:141] libmachine: (enable-default-cni-010870) Building disk image from file:///home/jenkins/minikube-integration/17585-9647/.minikube/cache/iso/amd64/minikube-v1.32.1-amd64.iso
	I1108 00:36:51.900927   61493 main.go:141] libmachine: (enable-default-cni-010870) Downloading /home/jenkins/minikube-integration/17585-9647/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17585-9647/.minikube/cache/iso/amd64/minikube-v1.32.1-amd64.iso...
	I1108 00:36:52.175888   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | I1108 00:36:52.175755   61516 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/enable-default-cni-010870/id_rsa...
	I1108 00:36:52.374664   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | I1108 00:36:52.374484   61516 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/enable-default-cni-010870/enable-default-cni-010870.rawdisk...
	I1108 00:36:52.374714   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | Writing magic tar header
	I1108 00:36:52.374740   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | Writing SSH key tar header
	I1108 00:36:52.374756   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | I1108 00:36:52.374593   61516 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17585-9647/.minikube/machines/enable-default-cni-010870 ...
	I1108 00:36:52.374784   61493 main.go:141] libmachine: (enable-default-cni-010870) Setting executable bit set on /home/jenkins/minikube-integration/17585-9647/.minikube/machines/enable-default-cni-010870 (perms=drwx------)
	I1108 00:36:52.374807   61493 main.go:141] libmachine: (enable-default-cni-010870) Setting executable bit set on /home/jenkins/minikube-integration/17585-9647/.minikube/machines (perms=drwxr-xr-x)
	I1108 00:36:52.374832   61493 main.go:141] libmachine: (enable-default-cni-010870) Setting executable bit set on /home/jenkins/minikube-integration/17585-9647/.minikube (perms=drwxr-xr-x)
	I1108 00:36:52.374847   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/enable-default-cni-010870
	I1108 00:36:52.374864   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17585-9647/.minikube/machines
	I1108 00:36:52.374880   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17585-9647/.minikube
	I1108 00:36:52.374896   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17585-9647
	I1108 00:36:52.374912   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1108 00:36:52.374934   61493 main.go:141] libmachine: (enable-default-cni-010870) Setting executable bit set on /home/jenkins/minikube-integration/17585-9647 (perms=drwxrwxr-x)
	I1108 00:36:52.374945   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | Checking permissions on dir: /home/jenkins
	I1108 00:36:52.374956   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | Checking permissions on dir: /home
	I1108 00:36:52.374969   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | Skipping /home - not owner
	I1108 00:36:52.374992   61493 main.go:141] libmachine: (enable-default-cni-010870) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1108 00:36:52.375015   61493 main.go:141] libmachine: (enable-default-cni-010870) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1108 00:36:52.375029   61493 main.go:141] libmachine: (enable-default-cni-010870) Creating domain...
	I1108 00:36:52.375912   61493 main.go:141] libmachine: (enable-default-cni-010870) define libvirt domain using xml: 
	I1108 00:36:52.375933   61493 main.go:141] libmachine: (enable-default-cni-010870) <domain type='kvm'>
	I1108 00:36:52.375955   61493 main.go:141] libmachine: (enable-default-cni-010870)   <name>enable-default-cni-010870</name>
	I1108 00:36:52.375970   61493 main.go:141] libmachine: (enable-default-cni-010870)   <memory unit='MiB'>3072</memory>
	I1108 00:36:52.375986   61493 main.go:141] libmachine: (enable-default-cni-010870)   <vcpu>2</vcpu>
	I1108 00:36:52.375997   61493 main.go:141] libmachine: (enable-default-cni-010870)   <features>
	I1108 00:36:52.376008   61493 main.go:141] libmachine: (enable-default-cni-010870)     <acpi/>
	I1108 00:36:52.376024   61493 main.go:141] libmachine: (enable-default-cni-010870)     <apic/>
	I1108 00:36:52.376035   61493 main.go:141] libmachine: (enable-default-cni-010870)     <pae/>
	I1108 00:36:52.376046   61493 main.go:141] libmachine: (enable-default-cni-010870)     
	I1108 00:36:52.376059   61493 main.go:141] libmachine: (enable-default-cni-010870)   </features>
	I1108 00:36:52.376071   61493 main.go:141] libmachine: (enable-default-cni-010870)   <cpu mode='host-passthrough'>
	I1108 00:36:52.376084   61493 main.go:141] libmachine: (enable-default-cni-010870)   
	I1108 00:36:52.376097   61493 main.go:141] libmachine: (enable-default-cni-010870)   </cpu>
	I1108 00:36:52.376115   61493 main.go:141] libmachine: (enable-default-cni-010870)   <os>
	I1108 00:36:52.376128   61493 main.go:141] libmachine: (enable-default-cni-010870)     <type>hvm</type>
	I1108 00:36:52.376140   61493 main.go:141] libmachine: (enable-default-cni-010870)     <boot dev='cdrom'/>
	I1108 00:36:52.376153   61493 main.go:141] libmachine: (enable-default-cni-010870)     <boot dev='hd'/>
	I1108 00:36:52.376164   61493 main.go:141] libmachine: (enable-default-cni-010870)     <bootmenu enable='no'/>
	I1108 00:36:52.376183   61493 main.go:141] libmachine: (enable-default-cni-010870)   </os>
	I1108 00:36:52.376206   61493 main.go:141] libmachine: (enable-default-cni-010870)   <devices>
	I1108 00:36:52.376223   61493 main.go:141] libmachine: (enable-default-cni-010870)     <disk type='file' device='cdrom'>
	I1108 00:36:52.376240   61493 main.go:141] libmachine: (enable-default-cni-010870)       <source file='/home/jenkins/minikube-integration/17585-9647/.minikube/machines/enable-default-cni-010870/boot2docker.iso'/>
	I1108 00:36:52.376258   61493 main.go:141] libmachine: (enable-default-cni-010870)       <target dev='hdc' bus='scsi'/>
	I1108 00:36:52.376270   61493 main.go:141] libmachine: (enable-default-cni-010870)       <readonly/>
	I1108 00:36:52.376280   61493 main.go:141] libmachine: (enable-default-cni-010870)     </disk>
	I1108 00:36:52.376294   61493 main.go:141] libmachine: (enable-default-cni-010870)     <disk type='file' device='disk'>
	I1108 00:36:52.376309   61493 main.go:141] libmachine: (enable-default-cni-010870)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1108 00:36:52.376329   61493 main.go:141] libmachine: (enable-default-cni-010870)       <source file='/home/jenkins/minikube-integration/17585-9647/.minikube/machines/enable-default-cni-010870/enable-default-cni-010870.rawdisk'/>
	I1108 00:36:52.376342   61493 main.go:141] libmachine: (enable-default-cni-010870)       <target dev='hda' bus='virtio'/>
	I1108 00:36:52.376354   61493 main.go:141] libmachine: (enable-default-cni-010870)     </disk>
	I1108 00:36:52.376368   61493 main.go:141] libmachine: (enable-default-cni-010870)     <interface type='network'>
	I1108 00:36:52.376383   61493 main.go:141] libmachine: (enable-default-cni-010870)       <source network='mk-enable-default-cni-010870'/>
	I1108 00:36:52.376401   61493 main.go:141] libmachine: (enable-default-cni-010870)       <model type='virtio'/>
	I1108 00:36:52.376415   61493 main.go:141] libmachine: (enable-default-cni-010870)     </interface>
	I1108 00:36:52.376428   61493 main.go:141] libmachine: (enable-default-cni-010870)     <interface type='network'>
	I1108 00:36:52.376467   61493 main.go:141] libmachine: (enable-default-cni-010870)       <source network='default'/>
	I1108 00:36:52.376493   61493 main.go:141] libmachine: (enable-default-cni-010870)       <model type='virtio'/>
	I1108 00:36:52.376505   61493 main.go:141] libmachine: (enable-default-cni-010870)     </interface>
	I1108 00:36:52.376515   61493 main.go:141] libmachine: (enable-default-cni-010870)     <serial type='pty'>
	I1108 00:36:52.376536   61493 main.go:141] libmachine: (enable-default-cni-010870)       <target port='0'/>
	I1108 00:36:52.376545   61493 main.go:141] libmachine: (enable-default-cni-010870)     </serial>
	I1108 00:36:52.376554   61493 main.go:141] libmachine: (enable-default-cni-010870)     <console type='pty'>
	I1108 00:36:52.376563   61493 main.go:141] libmachine: (enable-default-cni-010870)       <target type='serial' port='0'/>
	I1108 00:36:52.376573   61493 main.go:141] libmachine: (enable-default-cni-010870)     </console>
	I1108 00:36:52.376582   61493 main.go:141] libmachine: (enable-default-cni-010870)     <rng model='virtio'>
	I1108 00:36:52.376601   61493 main.go:141] libmachine: (enable-default-cni-010870)       <backend model='random'>/dev/random</backend>
	I1108 00:36:52.376629   61493 main.go:141] libmachine: (enable-default-cni-010870)     </rng>
	I1108 00:36:52.376644   61493 main.go:141] libmachine: (enable-default-cni-010870)     
	I1108 00:36:52.376657   61493 main.go:141] libmachine: (enable-default-cni-010870)     
	I1108 00:36:52.376672   61493 main.go:141] libmachine: (enable-default-cni-010870)   </devices>
	I1108 00:36:52.376685   61493 main.go:141] libmachine: (enable-default-cni-010870) </domain>
	I1108 00:36:52.376702   61493 main.go:141] libmachine: (enable-default-cni-010870) 
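
The libvirt domain definition above is logged line by line as it is assembled. A small sketch of producing an equivalent definition with text/template; the template and field names are assumptions that mirror only what the log shows (name, memory, vcpu, two virtio NICs), not libmachine's real template:

package main

import (
	"os"
	"text/template"
)

// domainTmpl mirrors only the skeleton visible in the log; it is an
// illustrative template, not the one the kvm2 driver actually uses.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <devices>
    <interface type='network'><source network='{{.PrivateNet}}'/><model type='virtio'/></interface>
    <interface type='network'><source network='default'/><model type='virtio'/></interface>
  </devices>
</domain>
`

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	err := t.Execute(os.Stdout, struct {
		Name, PrivateNet string
		MemoryMiB, CPUs  int
	}{"enable-default-cni-010870", "mk-enable-default-cni-010870", 3072, 2})
	if err != nil {
		panic(err)
	}
}
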
	I1108 00:36:52.380243   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | domain enable-default-cni-010870 has defined MAC address 52:54:00:cc:37:1e in network default
	I1108 00:36:52.380927   61493 main.go:141] libmachine: (enable-default-cni-010870) Ensuring networks are active...
	I1108 00:36:52.380968   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | domain enable-default-cni-010870 has defined MAC address 52:54:00:b2:7a:7f in network mk-enable-default-cni-010870
	I1108 00:36:52.381862   61493 main.go:141] libmachine: (enable-default-cni-010870) Ensuring network default is active
	I1108 00:36:52.382290   61493 main.go:141] libmachine: (enable-default-cni-010870) Ensuring network mk-enable-default-cni-010870 is active
	I1108 00:36:52.382936   61493 main.go:141] libmachine: (enable-default-cni-010870) Getting domain xml...
	I1108 00:36:52.383790   61493 main.go:141] libmachine: (enable-default-cni-010870) Creating domain...
	I1108 00:36:53.944625   61493 main.go:141] libmachine: (enable-default-cni-010870) Waiting to get IP...
	I1108 00:36:53.945629   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | domain enable-default-cni-010870 has defined MAC address 52:54:00:b2:7a:7f in network mk-enable-default-cni-010870
	I1108 00:36:53.946261   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | unable to find current IP address of domain enable-default-cni-010870 in network mk-enable-default-cni-010870
	I1108 00:36:53.946292   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | I1108 00:36:53.946247   61516 retry.go:31] will retry after 222.957224ms: waiting for machine to come up
	I1108 00:36:54.170980   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | domain enable-default-cni-010870 has defined MAC address 52:54:00:b2:7a:7f in network mk-enable-default-cni-010870
	I1108 00:36:54.171552   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | unable to find current IP address of domain enable-default-cni-010870 in network mk-enable-default-cni-010870
	I1108 00:36:54.171596   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | I1108 00:36:54.171498   61516 retry.go:31] will retry after 384.697886ms: waiting for machine to come up
	I1108 00:36:54.558219   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | domain enable-default-cni-010870 has defined MAC address 52:54:00:b2:7a:7f in network mk-enable-default-cni-010870
	I1108 00:36:54.558717   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | unable to find current IP address of domain enable-default-cni-010870 in network mk-enable-default-cni-010870
	I1108 00:36:54.558755   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | I1108 00:36:54.558672   61516 retry.go:31] will retry after 353.697283ms: waiting for machine to come up
	I1108 00:36:54.915420   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | domain enable-default-cni-010870 has defined MAC address 52:54:00:b2:7a:7f in network mk-enable-default-cni-010870
	I1108 00:36:54.915948   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | unable to find current IP address of domain enable-default-cni-010870 in network mk-enable-default-cni-010870
	I1108 00:36:54.915979   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | I1108 00:36:54.915883   61516 retry.go:31] will retry after 511.383074ms: waiting for machine to come up
	I1108 00:36:55.428683   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | domain enable-default-cni-010870 has defined MAC address 52:54:00:b2:7a:7f in network mk-enable-default-cni-010870
	I1108 00:36:55.429207   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | unable to find current IP address of domain enable-default-cni-010870 in network mk-enable-default-cni-010870
	I1108 00:36:55.429240   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | I1108 00:36:55.429166   61516 retry.go:31] will retry after 601.641524ms: waiting for machine to come up
	I1108 00:36:56.032447   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | domain enable-default-cni-010870 has defined MAC address 52:54:00:b2:7a:7f in network mk-enable-default-cni-010870
	I1108 00:36:56.033085   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | unable to find current IP address of domain enable-default-cni-010870 in network mk-enable-default-cni-010870
	I1108 00:36:56.033116   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | I1108 00:36:56.033011   61516 retry.go:31] will retry after 900.346677ms: waiting for machine to come up
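
The retry.go lines show a roughly doubling, jittered delay while polling for the VM's DHCP lease. A minimal sketch of such a wait loop, assuming the backoff shape suggested by the logged intervals; this is not minikube's actual retry package:

package machine

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it yields an address, sleeping a
// jittered, roughly doubling interval between attempts, matching the
// shape of the "will retry after ..." intervals logged above.
func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff *= 2
	}
	return "", errors.New("timed out waiting for machine IP")
}
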
	I1108 00:36:56.327340   58234 node_ready.go:58] node "calico-010870" has status "Ready":"False"
	I1108 00:36:58.328682   58234 node_ready.go:49] node "calico-010870" has status "Ready":"True"
	I1108 00:36:58.328714   58234 node_ready.go:38] duration metric: took 10.548584602s waiting for node "calico-010870" to be "Ready" ...
	I1108 00:36:58.328725   58234 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:36:58.343296   58234 pod_ready.go:78] waiting up to 15m0s for pod "calico-kube-controllers-558d465845-bxcnr" in "kube-system" namespace to be "Ready" ...
	I1108 00:37:00.367423   58234 pod_ready.go:102] pod "calico-kube-controllers-558d465845-bxcnr" in "kube-system" namespace has status "Ready":"False"
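
The node_ready.go and pod_ready.go lines poll the node until its Ready condition flips to True, then widen the wait to the listed system-critical pods. A short client-go-style sketch of the Ready check itself; the function name is ours:

package readiness

import corev1 "k8s.io/api/core/v1"

// IsNodeReady reports whether a node's Ready condition is True, the
// check behind the `"Ready":"False"` / `"Ready":"True"` lines above.
func IsNodeReady(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}
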
	I1108 00:37:00.474936   59816 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.002847 seconds
	I1108 00:37:00.475132   59816 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 00:37:00.496790   59816 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 00:37:01.028157   59816 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 00:37:01.028441   59816 kubeadm.go:322] [mark-control-plane] Marking the node custom-flannel-010870 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 00:37:01.545691   59816 kubeadm.go:322] [bootstrap-token] Using token: 8l0yhz.kw2d3firsp3b5dab
	I1108 00:36:56.935230   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | domain enable-default-cni-010870 has defined MAC address 52:54:00:b2:7a:7f in network mk-enable-default-cni-010870
	I1108 00:36:56.935776   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | unable to find current IP address of domain enable-default-cni-010870 in network mk-enable-default-cni-010870
	I1108 00:36:56.935806   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | I1108 00:36:56.935726   61516 retry.go:31] will retry after 1.134102751s: waiting for machine to come up
	I1108 00:36:58.072123   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | domain enable-default-cni-010870 has defined MAC address 52:54:00:b2:7a:7f in network mk-enable-default-cni-010870
	I1108 00:36:58.072589   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | unable to find current IP address of domain enable-default-cni-010870 in network mk-enable-default-cni-010870
	I1108 00:36:58.072620   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | I1108 00:36:58.072526   61516 retry.go:31] will retry after 1.062220403s: waiting for machine to come up
	I1108 00:36:59.136282   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | domain enable-default-cni-010870 has defined MAC address 52:54:00:b2:7a:7f in network mk-enable-default-cni-010870
	I1108 00:36:59.136898   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | unable to find current IP address of domain enable-default-cni-010870 in network mk-enable-default-cni-010870
	I1108 00:36:59.136929   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | I1108 00:36:59.136840   61516 retry.go:31] will retry after 1.594331033s: waiting for machine to come up
	I1108 00:37:00.733004   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | domain enable-default-cni-010870 has defined MAC address 52:54:00:b2:7a:7f in network mk-enable-default-cni-010870
	I1108 00:37:00.733590   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | unable to find current IP address of domain enable-default-cni-010870 in network mk-enable-default-cni-010870
	I1108 00:37:00.733620   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | I1108 00:37:00.733536   61516 retry.go:31] will retry after 1.854301472s: waiting for machine to come up
	I1108 00:37:01.547068   59816 out.go:204]   - Configuring RBAC rules ...
	I1108 00:37:01.547206   59816 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 00:37:01.566631   59816 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 00:37:01.588541   59816 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 00:37:01.594019   59816 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 00:37:01.603000   59816 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 00:37:01.608272   59816 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 00:37:01.659770   59816 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 00:37:02.142132   59816 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1108 00:37:02.238768   59816 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1108 00:37:02.241150   59816 kubeadm.go:322] 
	I1108 00:37:02.241238   59816 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1108 00:37:02.241255   59816 kubeadm.go:322] 
	I1108 00:37:02.241351   59816 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1108 00:37:02.241365   59816 kubeadm.go:322] 
	I1108 00:37:02.241392   59816 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1108 00:37:02.241456   59816 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 00:37:02.241519   59816 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 00:37:02.241529   59816 kubeadm.go:322] 
	I1108 00:37:02.241598   59816 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1108 00:37:02.241609   59816 kubeadm.go:322] 
	I1108 00:37:02.241681   59816 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 00:37:02.241691   59816 kubeadm.go:322] 
	I1108 00:37:02.241753   59816 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1108 00:37:02.241842   59816 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 00:37:02.241922   59816 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 00:37:02.241934   59816 kubeadm.go:322] 
	I1108 00:37:02.242024   59816 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 00:37:02.242116   59816 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1108 00:37:02.242138   59816 kubeadm.go:322] 
	I1108 00:37:02.242228   59816 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8l0yhz.kw2d3firsp3b5dab \
	I1108 00:37:02.242342   59816 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 \
	I1108 00:37:02.242366   59816 kubeadm.go:322] 	--control-plane 
	I1108 00:37:02.242371   59816 kubeadm.go:322] 
	I1108 00:37:02.242461   59816 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1108 00:37:02.242467   59816 kubeadm.go:322] 
	I1108 00:37:02.242560   59816 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8l0yhz.kw2d3firsp3b5dab \
	I1108 00:37:02.242682   59816 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 
	I1108 00:37:02.243007   59816 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
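
The join commands above carry a --discovery-token-ca-cert-hash, which kubeadm defines as the SHA-256 of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo. A sketch that recomputes it in Go, assuming the default kubeadm CA path:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// Recompute kubeadm's --discovery-token-ca-cert-hash: SHA-256 over
// the CA certificate's DER-encoded SubjectPublicKeyInfo.
func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // default kubeadm CA path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
}
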
	I1108 00:37:02.243037   59816 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1108 00:37:02.245975   59816 out.go:177] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I1108 00:37:02.367617   58234 pod_ready.go:102] pod "calico-kube-controllers-558d465845-bxcnr" in "kube-system" namespace has status "Ready":"False"
	I1108 00:37:04.865324   58234 pod_ready.go:102] pod "calico-kube-controllers-558d465845-bxcnr" in "kube-system" namespace has status "Ready":"False"
	I1108 00:37:02.247853   59816 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1108 00:37:02.247908   59816 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/tmp/minikube/cni.yaml
	I1108 00:37:02.287117   59816 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%!s(MISSING) %!y(MISSING)" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1108 00:37:02.287215   59816 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I1108 00:37:02.357418   59816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 00:37:03.710996   59816 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.353541625s)
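
The three-step sequence above is an idempotent push: stat the remote manifest, copy it over only on a miss, then kubectl-apply it. A compact sketch of that pattern, with run and push as hypothetical stand-ins for minikube's ssh_runner:

package provision

// applyCNI mirrors the sequence in the log: stat the remote manifest,
// push it only on a miss, then kubectl-apply it. run and push are
// hypothetical stand-ins for minikube's ssh_runner.
func applyCNI(run func(cmd string) error, push func(local, remote string) error, local, remote string) error {
	if err := run(`stat -c "%s %y" ` + remote); err != nil {
		// stat exited non-zero: the manifest is not on the node yet.
		if err := push(local, remote); err != nil {
			return err
		}
	}
	return run("sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f " + remote)
}
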
	I1108 00:37:03.711060   59816 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 00:37:03.711185   59816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:37:03.711245   59816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=custom-flannel-010870 minikube.k8s.io/updated_at=2023_11_08T00_37_03_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:37:03.892188   59816 ops.go:34] apiserver oom_adj: -16
	I1108 00:37:03.892297   59816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:37:03.993957   59816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:37:04.601550   59816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:37:05.101937   59816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:37:05.600961   59816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:37:06.101704   59816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
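
The repeated `kubectl get sa default` runs poll until the default ServiceAccount exists, a common signal that the controller-manager's service-account machinery is live. A sketch of such a poll; the 500ms interval and the timeout are assumptions, not minikube's exact values:

package provision

import (
	"os/exec"
	"time"
)

// waitForDefaultSA runs `kubectl get sa default` at a fixed interval
// until it succeeds, the loop behind the repeated get-sa runs above.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if cmd.Run() == nil {
			return true // default ServiceAccount exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}
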
	I1108 00:37:02.589364   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | domain enable-default-cni-010870 has defined MAC address 52:54:00:b2:7a:7f in network mk-enable-default-cni-010870
	I1108 00:37:02.590015   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | unable to find current IP address of domain enable-default-cni-010870 in network mk-enable-default-cni-010870
	I1108 00:37:02.590131   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | I1108 00:37:02.590087   61516 retry.go:31] will retry after 2.159503693s: waiting for machine to come up
	I1108 00:37:04.751907   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | domain enable-default-cni-010870 has defined MAC address 52:54:00:b2:7a:7f in network mk-enable-default-cni-010870
	I1108 00:37:04.752493   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | unable to find current IP address of domain enable-default-cni-010870 in network mk-enable-default-cni-010870
	I1108 00:37:04.752529   61493 main.go:141] libmachine: (enable-default-cni-010870) DBG | I1108 00:37:04.752435   61516 retry.go:31] will retry after 2.331810075s: waiting for machine to come up
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-11-08 00:13:32 UTC, ends at Wed 2023-11-08 00:37:10 UTC. --
	Nov 08 00:37:10 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:37:10.021231172Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699403830021206192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=2f24108c-a39b-445a-ab3c-7fb313b5d72a name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:37:10 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:37:10.022257482Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=122d899d-7425-463c-b010-cddfd510229d name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:37:10 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:37:10.022325589Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=122d899d-7425-463c-b010-cddfd510229d name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:37:10 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:37:10.022636840Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3baa241fce7c43bab30bd0b77cd3079988292b3e06d253102ef620bdef914922,PodSandboxId:2e86de6acbdd982b5e175f4dd08f28c8b8decc5748c7f2d2d7dbd5a73648b647,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699402743441274286,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cace2ff-d7cd-4d31-9f11-d410bc675cbf,},Annotations:map[string]string{io.kubernetes.container.hash: 64da2d49,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553d948d1c69c70129d55ba50eaf0b2a16b8e4028908ace6c6a852a93ffd3ca5,PodSandboxId:bc2ef5da14b350463f9dd7ed1fb741b709c54643cbd7ed430933d11b14672ca5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699402742789255094,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rhdhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 405b26b9-e6b3-440d-8f28-60db650079a8,},Annotations:map[string]string{io.kubernetes.container.hash: 66eccec0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099e79a93f06647861e8ac86286ab0091d838e8e4c69779995ea7de641c854c3,PodSandboxId:0384e739371ad111505093202f0b03033263785760b05f37c0ee5964a654a203,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699402741615317724,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tt9sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964a0552-9be0-4dbb-9a2f-0be3c93b8f83,},Annotations:map[string]string{io.kubernetes.container.hash: 2d6995fc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc3200ebf1e5c1d240f0d732419ba5107161506fa65d9572379bc6b978322da4,PodSandboxId:40a6f301c41c87f156c13dbbba5bb9903d60faa20966bb8cf515713e46b75e31,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699402717661773400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-039263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8386c5fbded7d9148
0b4ab5948c70416,},Annotations:map[string]string{io.kubernetes.container.hash: f0eaf05,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9886e2d0bcb1f12980973b77af67452b7878638c5ff2d9ac0540bf4332f10392,PodSandboxId:0084df71fd8718c5b64b976397d055f8347073777c01d14022cb905a1d34775f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699402717506641933,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-039263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66be78a13c9085fed5
3443574bd068ff,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e89ecbf60951682cdc5f067fe7b5302ef77673247eeee26a25e9835f9bff4b,PodSandboxId:fb393347badea335742a064fcc564c65cb9eeefc13a09420d0479239f7572b80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699402717115241176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-039263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14855cec42a18ea4b2
c790ced4285e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 9183250f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f1d88317e3e89285ca556ec4ee523b694a605081d65d8f6e27d627099ab0fb,PodSandboxId:cd45b30c32332993a313c682436b1ec33c74b2f8706d5ff283ae8d27103f9bb8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699402717044250777,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-039263,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: f2addd0e9156fe002e814e1d06076f53,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=122d899d-7425-463c-b010-cddfd510229d name=/runtime.v1.RuntimeService/ListContainers
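
The journal entries here are CRI gRPC round-trips (Version, ImageFsInfo, ListContainers) answered by crio. A sketch of issuing the same ListContainers call with the k8s.io/cri-api client; the socket path is CRI-O's usual default and is an assumption for this environment:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// Issue the same RuntimeService/ListContainers call crio answers in
// the journal above. Adjust the socket path for your host.
func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Println(c.Metadata.Name, c.State)
	}
}
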
	Nov 08 00:37:10 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:37:10.072717236Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a72832b8-6e26-495a-b046-17034dab056b name=/runtime.v1.RuntimeService/Version
	Nov 08 00:37:10 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:37:10.072797941Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a72832b8-6e26-495a-b046-17034dab056b name=/runtime.v1.RuntimeService/Version
	Nov 08 00:37:10 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:37:10.074411846Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=58d99007-d62b-4066-914e-ffc47bc23ea4 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:37:10 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:37:10.074974468Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699403830074953203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=58d99007-d62b-4066-914e-ffc47bc23ea4 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:37:10 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:37:10.075607919Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=be00a4d8-d1a4-4d86-b5e8-a398cf319184 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:37:10 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:37:10.075704596Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=be00a4d8-d1a4-4d86-b5e8-a398cf319184 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:37:10 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:37:10.075931381Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3baa241fce7c43bab30bd0b77cd3079988292b3e06d253102ef620bdef914922,PodSandboxId:2e86de6acbdd982b5e175f4dd08f28c8b8decc5748c7f2d2d7dbd5a73648b647,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699402743441274286,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cace2ff-d7cd-4d31-9f11-d410bc675cbf,},Annotations:map[string]string{io.kubernetes.container.hash: 64da2d49,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553d948d1c69c70129d55ba50eaf0b2a16b8e4028908ace6c6a852a93ffd3ca5,PodSandboxId:bc2ef5da14b350463f9dd7ed1fb741b709c54643cbd7ed430933d11b14672ca5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699402742789255094,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rhdhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 405b26b9-e6b3-440d-8f28-60db650079a8,},Annotations:map[string]string{io.kubernetes.container.hash: 66eccec0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099e79a93f06647861e8ac86286ab0091d838e8e4c69779995ea7de641c854c3,PodSandboxId:0384e739371ad111505093202f0b03033263785760b05f37c0ee5964a654a203,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699402741615317724,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tt9sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964a0552-9be0-4dbb-9a2f-0be3c93b8f83,},Annotations:map[string]string{io.kubernetes.container.hash: 2d6995fc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc3200ebf1e5c1d240f0d732419ba5107161506fa65d9572379bc6b978322da4,PodSandboxId:40a6f301c41c87f156c13dbbba5bb9903d60faa20966bb8cf515713e46b75e31,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699402717661773400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-039263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8386c5fbded7d9148
0b4ab5948c70416,},Annotations:map[string]string{io.kubernetes.container.hash: f0eaf05,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9886e2d0bcb1f12980973b77af67452b7878638c5ff2d9ac0540bf4332f10392,PodSandboxId:0084df71fd8718c5b64b976397d055f8347073777c01d14022cb905a1d34775f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699402717506641933,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-039263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66be78a13c9085fed5
3443574bd068ff,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e89ecbf60951682cdc5f067fe7b5302ef77673247eeee26a25e9835f9bff4b,PodSandboxId:fb393347badea335742a064fcc564c65cb9eeefc13a09420d0479239f7572b80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699402717115241176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-039263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14855cec42a18ea4b2
c790ced4285e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 9183250f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f1d88317e3e89285ca556ec4ee523b694a605081d65d8f6e27d627099ab0fb,PodSandboxId:cd45b30c32332993a313c682436b1ec33c74b2f8706d5ff283ae8d27103f9bb8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699402717044250777,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-039263,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: f2addd0e9156fe002e814e1d06076f53,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=be00a4d8-d1a4-4d86-b5e8-a398cf319184 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:37:10 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:37:10.126128958Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=23d423d2-ec80-44be-affd-8c63654421ae name=/runtime.v1.RuntimeService/Version
	Nov 08 00:37:10 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:37:10.126248432Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=23d423d2-ec80-44be-affd-8c63654421ae name=/runtime.v1.RuntimeService/Version
	Nov 08 00:37:10 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:37:10.128503587Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e405bdd9-5d67-455e-836a-5fb7a78a3db3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:37:10 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:37:10.129025961Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699403830129010403,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e405bdd9-5d67-455e-836a-5fb7a78a3db3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:37:10 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:37:10.130573618Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b380d6a5-0557-434e-9b5e-9604c68d8e42 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:37:10 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:37:10.130666329Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b380d6a5-0557-434e-9b5e-9604c68d8e42 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:37:10 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:37:10.130884212Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3baa241fce7c43bab30bd0b77cd3079988292b3e06d253102ef620bdef914922,PodSandboxId:2e86de6acbdd982b5e175f4dd08f28c8b8decc5748c7f2d2d7dbd5a73648b647,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699402743441274286,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cace2ff-d7cd-4d31-9f11-d410bc675cbf,},Annotations:map[string]string{io.kubernetes.container.hash: 64da2d49,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553d948d1c69c70129d55ba50eaf0b2a16b8e4028908ace6c6a852a93ffd3ca5,PodSandboxId:bc2ef5da14b350463f9dd7ed1fb741b709c54643cbd7ed430933d11b14672ca5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699402742789255094,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rhdhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 405b26b9-e6b3-440d-8f28-60db650079a8,},Annotations:map[string]string{io.kubernetes.container.hash: 66eccec0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099e79a93f06647861e8ac86286ab0091d838e8e4c69779995ea7de641c854c3,PodSandboxId:0384e739371ad111505093202f0b03033263785760b05f37c0ee5964a654a203,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699402741615317724,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tt9sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964a0552-9be0-4dbb-9a2f-0be3c93b8f83,},Annotations:map[string]string{io.kubernetes.container.hash: 2d6995fc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc3200ebf1e5c1d240f0d732419ba5107161506fa65d9572379bc6b978322da4,PodSandboxId:40a6f301c41c87f156c13dbbba5bb9903d60faa20966bb8cf515713e46b75e31,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699402717661773400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-039263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8386c5fbded7d9148
0b4ab5948c70416,},Annotations:map[string]string{io.kubernetes.container.hash: f0eaf05,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9886e2d0bcb1f12980973b77af67452b7878638c5ff2d9ac0540bf4332f10392,PodSandboxId:0084df71fd8718c5b64b976397d055f8347073777c01d14022cb905a1d34775f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699402717506641933,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-039263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66be78a13c9085fed5
3443574bd068ff,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e89ecbf60951682cdc5f067fe7b5302ef77673247eeee26a25e9835f9bff4b,PodSandboxId:fb393347badea335742a064fcc564c65cb9eeefc13a09420d0479239f7572b80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699402717115241176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-039263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14855cec42a18ea4b2
c790ced4285e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 9183250f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f1d88317e3e89285ca556ec4ee523b694a605081d65d8f6e27d627099ab0fb,PodSandboxId:cd45b30c32332993a313c682436b1ec33c74b2f8706d5ff283ae8d27103f9bb8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699402717044250777,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-039263,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: f2addd0e9156fe002e814e1d06076f53,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b380d6a5-0557-434e-9b5e-9604c68d8e42 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:37:10 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:37:10.177303954Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f1e44ae0-52c7-4f2a-a2c8-0e7ee88f0417 name=/runtime.v1.RuntimeService/Version
	Nov 08 00:37:10 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:37:10.177568126Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f1e44ae0-52c7-4f2a-a2c8-0e7ee88f0417 name=/runtime.v1.RuntimeService/Version
	Nov 08 00:37:10 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:37:10.179685526Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e5a033bb-9691-4f5c-a05f-de7237750c75 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:37:10 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:37:10.180462756Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699403830180441850,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e5a033bb-9691-4f5c-a05f-de7237750c75 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:37:10 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:37:10.181309444Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=40f8ff53-69c2-45dd-8ef7-a50667dfb6e2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:37:10 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:37:10.181471150Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=40f8ff53-69c2-45dd-8ef7-a50667dfb6e2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:37:10 default-k8s-diff-port-039263 crio[714]: time="2023-11-08 00:37:10.181696877Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3baa241fce7c43bab30bd0b77cd3079988292b3e06d253102ef620bdef914922,PodSandboxId:2e86de6acbdd982b5e175f4dd08f28c8b8decc5748c7f2d2d7dbd5a73648b647,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699402743441274286,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cace2ff-d7cd-4d31-9f11-d410bc675cbf,},Annotations:map[string]string{io.kubernetes.container.hash: 64da2d49,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553d948d1c69c70129d55ba50eaf0b2a16b8e4028908ace6c6a852a93ffd3ca5,PodSandboxId:bc2ef5da14b350463f9dd7ed1fb741b709c54643cbd7ed430933d11b14672ca5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8,State:CONTAINER_RUNNING,CreatedAt:1699402742789255094,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rhdhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 405b26b9-e6b3-440d-8f28-60db650079a8,},Annotations:map[string]string{io.kubernetes.container.hash: 66eccec0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:099e79a93f06647861e8ac86286ab0091d838e8e4c69779995ea7de641c854c3,PodSandboxId:0384e739371ad111505093202f0b03033263785760b05f37c0ee5964a654a203,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1699402741615317724,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tt9sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964a0552-9be0-4dbb-9a2f-0be3c93b8f83,},Annotations:map[string]string{io.kubernetes.container.hash: 2d6995fc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc3200ebf1e5c1d240f0d732419ba5107161506fa65d9572379bc6b978322da4,PodSandboxId:40a6f301c41c87f156c13dbbba5bb9903d60faa20966bb8cf515713e46b75e31,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1699402717661773400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-039263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8386c5fbded7d9148
0b4ab5948c70416,},Annotations:map[string]string{io.kubernetes.container.hash: f0eaf05,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9886e2d0bcb1f12980973b77af67452b7878638c5ff2d9ac0540bf4332f10392,PodSandboxId:0084df71fd8718c5b64b976397d055f8347073777c01d14022cb905a1d34775f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725,State:CONTAINER_RUNNING,CreatedAt:1699402717506641933,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-039263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66be78a13c9085fed5
3443574bd068ff,},Annotations:map[string]string{io.kubernetes.container.hash: 1a68c1c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e89ecbf60951682cdc5f067fe7b5302ef77673247eeee26a25e9835f9bff4b,PodSandboxId:fb393347badea335742a064fcc564c65cb9eeefc13a09420d0479239f7572b80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab,State:CONTAINER_RUNNING,CreatedAt:1699402717115241176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-039263,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14855cec42a18ea4b2
c790ced4285e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 9183250f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18f1d88317e3e89285ca556ec4ee523b694a605081d65d8f6e27d627099ab0fb,PodSandboxId:cd45b30c32332993a313c682436b1ec33c74b2f8706d5ff283ae8d27103f9bb8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707,State:CONTAINER_RUNNING,CreatedAt:1699402717044250777,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-039263,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: f2addd0e9156fe002e814e1d06076f53,},Annotations:map[string]string{io.kubernetes.container.hash: b07a2201,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=40f8ff53-69c2-45dd-8ef7-a50667dfb6e2 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3baa241fce7c4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Running             storage-provisioner       0                   2e86de6acbdd9       storage-provisioner
	553d948d1c69c       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   18 minutes ago      Running             kube-proxy                0                   bc2ef5da14b35       kube-proxy-rhdhg
	099e79a93f066       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   18 minutes ago      Running             coredns                   0                   0384e739371ad       coredns-5dd5756b68-tt9sm
	dc3200ebf1e5c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   18 minutes ago      Running             etcd                      2                   40a6f301c41c8       etcd-default-k8s-diff-port-039263
	9886e2d0bcb1f       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   18 minutes ago      Running             kube-scheduler            2                   0084df71fd871       kube-scheduler-default-k8s-diff-port-039263
	82e89ecbf6095       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   18 minutes ago      Running             kube-apiserver            2                   fb393347badea       kube-apiserver-default-k8s-diff-port-039263
	18f1d88317e3e       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   18 minutes ago      Running             kube-controller-manager   2                   cd45b30c32332       kube-controller-manager-default-k8s-diff-port-039263
	
	* 
	* ==> coredns [099e79a93f06647861e8ac86286ab0091d838e8e4c69779995ea7de641c854c3] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-039263
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-039263
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e
	                    minikube.k8s.io/name=default-k8s-diff-port-039263
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_08T00_18_45_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Nov 2023 00:18:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-039263
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Nov 2023 00:37:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Nov 2023 00:34:23 +0000   Wed, 08 Nov 2023 00:18:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Nov 2023 00:34:23 +0000   Wed, 08 Nov 2023 00:18:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Nov 2023 00:34:23 +0000   Wed, 08 Nov 2023 00:18:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Nov 2023 00:34:23 +0000   Wed, 08 Nov 2023 00:18:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.116
	  Hostname:    default-k8s-diff-port-039263
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 ae2a601a9aa5456da7ba4055df3e6884
	  System UUID:                ae2a601a-9aa5-456d-a7ba-4055df3e6884
	  Boot ID:                    4cd87de2-2e03-4df0-ad9e-9645a9503d64
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-tt9sm                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 etcd-default-k8s-diff-port-039263                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kube-apiserver-default-k8s-diff-port-039263             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-039263    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-rhdhg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-default-k8s-diff-port-039263             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 metrics-server-57f55c9bc5-j6t7g                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-039263 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-039263 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node default-k8s-diff-port-039263 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m                kubelet          Node default-k8s-diff-port-039263 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m                kubelet          Node default-k8s-diff-port-039263 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m                kubelet          Node default-k8s-diff-port-039263 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             18m                kubelet          Node default-k8s-diff-port-039263 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                18m                kubelet          Node default-k8s-diff-port-039263 status is now: NodeReady
	  Normal  RegisteredNode           18m                node-controller  Node default-k8s-diff-port-039263 event: Registered Node default-k8s-diff-port-039263 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov 8 00:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067766] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.576208] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.525977] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.152299] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.494811] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.394285] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.125489] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.171000] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.125389] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.268178] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[Nov 8 00:14] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	[ +19.570464] kauditd_printk_skb: 29 callbacks suppressed
	[Nov 8 00:18] systemd-fstab-generator[3512]: Ignoring "noauto" for root device
	[ +10.307595] systemd-fstab-generator[3836]: Ignoring "noauto" for root device
	[ +13.835443] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [dc3200ebf1e5c1d240f0d732419ba5107161506fa65d9572379bc6b978322da4] <==
	* {"level":"info","ts":"2023-11-08T00:18:39.616695Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T00:18:39.617848Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-08T00:18:39.618431Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-08T00:18:39.618455Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-08T00:18:39.619766Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"64fcf4fe45fcdc82","local-member-id":"86db9aa99badf4aa","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T00:18:39.619884Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T00:18:39.619905Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T00:28:39.89355Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":724}
	{"level":"info","ts":"2023-11-08T00:28:39.900164Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":724,"took":"6.171503ms","hash":381021562}
	{"level":"info","ts":"2023-11-08T00:28:39.900432Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":381021562,"revision":724,"compact-revision":-1}
	{"level":"info","ts":"2023-11-08T00:32:53.897258Z","caller":"traceutil/trace.go:171","msg":"trace[69985436] transaction","detail":"{read_only:false; response_revision:1174; number_of_response:1; }","duration":"201.575069ms","start":"2023-11-08T00:32:53.695628Z","end":"2023-11-08T00:32:53.897203Z","steps":["trace[69985436] 'process raft request'  (duration: 139.195953ms)","trace[69985436] 'compare'  (duration: 61.813681ms)"],"step_count":2}
	{"level":"info","ts":"2023-11-08T00:32:54.712096Z","caller":"traceutil/trace.go:171","msg":"trace[560577426] transaction","detail":"{read_only:false; response_revision:1175; number_of_response:1; }","duration":"161.08456ms","start":"2023-11-08T00:32:54.550996Z","end":"2023-11-08T00:32:54.71208Z","steps":["trace[560577426] 'process raft request'  (duration: 160.917991ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-08T00:33:39.90478Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":967}
	{"level":"info","ts":"2023-11-08T00:33:39.906593Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":967,"took":"1.508481ms","hash":307489805}
	{"level":"info","ts":"2023-11-08T00:33:39.906652Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":307489805,"revision":967,"compact-revision":724}
	{"level":"info","ts":"2023-11-08T00:34:05.297709Z","caller":"traceutil/trace.go:171","msg":"trace[907385010] transaction","detail":"{read_only:false; response_revision:1232; number_of_response:1; }","duration":"157.126221ms","start":"2023-11-08T00:34:05.140544Z","end":"2023-11-08T00:34:05.29767Z","steps":["trace[907385010] 'process raft request'  (duration: 157.006164ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-08T00:35:07.775195Z","caller":"traceutil/trace.go:171","msg":"trace[1929420834] transaction","detail":"{read_only:false; response_revision:1283; number_of_response:1; }","duration":"116.105729ms","start":"2023-11-08T00:35:07.659057Z","end":"2023-11-08T00:35:07.775163Z","steps":["trace[1929420834] 'process raft request'  (duration: 115.8555ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-08T00:35:09.908704Z","caller":"traceutil/trace.go:171","msg":"trace[2075656596] linearizableReadLoop","detail":"{readStateIndex:1496; appliedIndex:1495; }","duration":"127.941056ms","start":"2023-11-08T00:35:09.780746Z","end":"2023-11-08T00:35:09.908688Z","steps":["trace[2075656596] 'read index received'  (duration: 127.77844ms)","trace[2075656596] 'applied index is now lower than readState.Index'  (duration: 162.036µs)"],"step_count":2}
	{"level":"info","ts":"2023-11-08T00:35:09.908809Z","caller":"traceutil/trace.go:171","msg":"trace[282652657] transaction","detail":"{read_only:false; response_revision:1284; number_of_response:1; }","duration":"203.95874ms","start":"2023-11-08T00:35:09.704841Z","end":"2023-11-08T00:35:09.9088Z","steps":["trace[282652657] 'process raft request'  (duration: 203.672909ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-08T00:35:09.909155Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.344328ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1129"}
	{"level":"info","ts":"2023-11-08T00:35:09.90947Z","caller":"traceutil/trace.go:171","msg":"trace[220525919] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1284; }","duration":"128.73496ms","start":"2023-11-08T00:35:09.780723Z","end":"2023-11-08T00:35:09.909458Z","steps":["trace[220525919] 'agreement among raft nodes before linearized reading'  (duration: 128.303548ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-08T00:35:36.370191Z","caller":"traceutil/trace.go:171","msg":"trace[985343185] transaction","detail":"{read_only:false; response_revision:1308; number_of_response:1; }","duration":"110.21321ms","start":"2023-11-08T00:35:36.259937Z","end":"2023-11-08T00:35:36.37015Z","steps":["trace[985343185] 'process raft request'  (duration: 109.554311ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-08T00:36:43.876104Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.493744ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17630057263414641335 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.72.116\" mod_revision:1352 > success:<request_put:<key:\"/registry/masterleases/192.168.72.116\" value_size:67 lease:8406685226559865525 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.116\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-11-08T00:36:43.876812Z","caller":"traceutil/trace.go:171","msg":"trace[1771264508] transaction","detail":"{read_only:false; response_revision:1360; number_of_response:1; }","duration":"168.878958ms","start":"2023-11-08T00:36:43.707899Z","end":"2023-11-08T00:36:43.876778Z","steps":["trace[1771264508] 'process raft request'  (duration: 64.922862ms)","trace[1771264508] 'compare'  (duration: 96.771305ms)"],"step_count":2}
	{"level":"info","ts":"2023-11-08T00:36:46.942267Z","caller":"traceutil/trace.go:171","msg":"trace[1381562367] transaction","detail":"{read_only:false; response_revision:1363; number_of_response:1; }","duration":"174.714315ms","start":"2023-11-08T00:36:46.767531Z","end":"2023-11-08T00:36:46.942245Z","steps":["trace[1381562367] 'process raft request'  (duration: 174.24337ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  00:37:10 up 23 min,  0 users,  load average: 0.16, 0.36, 0.30
	Linux default-k8s-diff-port-039263 5.10.57 #1 SMP Tue Nov 7 06:51:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [82e89ecbf60951682cdc5f067fe7b5302ef77673247eeee26a25e9835f9bff4b] <==
	* I1108 00:33:41.729074       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1108 00:33:42.729183       1 handler_proxy.go:93] no RequestInfo found in the context
	E1108 00:33:42.729255       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1108 00:33:42.729267       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1108 00:33:42.729442       1 handler_proxy.go:93] no RequestInfo found in the context
	E1108 00:33:42.729571       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1108 00:33:42.730522       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1108 00:34:41.590061       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1108 00:34:42.729979       1 handler_proxy.go:93] no RequestInfo found in the context
	E1108 00:34:42.730031       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1108 00:34:42.730041       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1108 00:34:42.731144       1 handler_proxy.go:93] no RequestInfo found in the context
	E1108 00:34:42.731288       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1108 00:34:42.731326       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1108 00:35:41.590990       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1108 00:36:41.589947       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1108 00:36:42.730974       1 handler_proxy.go:93] no RequestInfo found in the context
	E1108 00:36:42.731066       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1108 00:36:42.731084       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1108 00:36:42.732135       1 handler_proxy.go:93] no RequestInfo found in the context
	E1108 00:36:42.732307       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1108 00:36:42.732459       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [18f1d88317e3e89285ca556ec4ee523b694a605081d65d8f6e27d627099ab0fb] <==
	* I1108 00:31:28.598475       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:31:58.049675       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:31:58.607915       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:32:28.055258       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:32:28.616525       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:32:58.062217       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:32:58.629732       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:33:28.068155       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:33:28.640658       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:33:58.078954       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:33:58.650467       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:34:28.092009       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:34:28.662665       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:34:58.099462       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:34:58.674583       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1108 00:35:09.914900       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="396.997µs"
	I1108 00:35:21.713229       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="228µs"
	E1108 00:35:28.105451       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:35:28.683990       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:35:58.111884       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:35:58.692730       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:36:28.119769       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:36:28.703476       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1108 00:36:58.130220       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1108 00:36:58.715703       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [553d948d1c69c70129d55ba50eaf0b2a16b8e4028908ace6c6a852a93ffd3ca5] <==
	* I1108 00:19:03.586848       1 server_others.go:69] "Using iptables proxy"
	I1108 00:19:03.622479       1 node.go:141] Successfully retrieved node IP: 192.168.72.116
	I1108 00:19:03.706581       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1108 00:19:03.706736       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1108 00:19:03.711901       1 server_others.go:152] "Using iptables Proxier"
	I1108 00:19:03.712880       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1108 00:19:03.714199       1 server.go:846] "Version info" version="v1.28.3"
	I1108 00:19:03.714242       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 00:19:03.716294       1 config.go:188] "Starting service config controller"
	I1108 00:19:03.716762       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1108 00:19:03.716834       1 config.go:97] "Starting endpoint slice config controller"
	I1108 00:19:03.716861       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1108 00:19:03.718523       1 config.go:315] "Starting node config controller"
	I1108 00:19:03.718664       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1108 00:19:03.817857       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1108 00:19:03.817894       1 shared_informer.go:318] Caches are synced for service config
	I1108 00:19:03.819282       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [9886e2d0bcb1f12980973b77af67452b7878638c5ff2d9ac0540bf4332f10392] <==
	* W1108 00:18:42.590731       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1108 00:18:42.590804       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1108 00:18:42.644880       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1108 00:18:42.645001       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1108 00:18:42.783201       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1108 00:18:42.783273       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1108 00:18:42.836710       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1108 00:18:42.836784       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1108 00:18:42.900654       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1108 00:18:42.900732       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1108 00:18:43.020963       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1108 00:18:43.021078       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 00:18:43.041263       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1108 00:18:43.041398       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1108 00:18:43.078744       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1108 00:18:43.078814       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1108 00:18:43.078911       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1108 00:18:43.078927       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1108 00:18:43.104153       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1108 00:18:43.104274       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1108 00:18:43.155843       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1108 00:18:43.155896       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1108 00:18:43.160232       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1108 00:18:43.160279       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1108 00:18:46.130450       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-11-08 00:13:32 UTC, ends at Wed 2023-11-08 00:37:10 UTC. --
	Nov 08 00:34:45 default-k8s-diff-port-039263 kubelet[3843]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 08 00:34:45 default-k8s-diff-port-039263 kubelet[3843]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 08 00:34:45 default-k8s-diff-port-039263 kubelet[3843]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 08 00:34:58 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:34:58.711021    3843 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Nov 08 00:34:58 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:34:58.711067    3843 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Nov 08 00:34:58 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:34:58.711921    3843 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-j4pg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe
:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessa
gePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-j6t7g_kube-system(5c0e827c-8281-4b51-b0c7-d43d0aa22e29): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 08 00:34:58 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:34:58.711966    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-j6t7g" podUID="5c0e827c-8281-4b51-b0c7-d43d0aa22e29"
	Nov 08 00:35:09 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:35:09.692160    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j6t7g" podUID="5c0e827c-8281-4b51-b0c7-d43d0aa22e29"
	Nov 08 00:35:21 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:35:21.693174    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j6t7g" podUID="5c0e827c-8281-4b51-b0c7-d43d0aa22e29"
	Nov 08 00:35:33 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:35:33.691095    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j6t7g" podUID="5c0e827c-8281-4b51-b0c7-d43d0aa22e29"
	Nov 08 00:35:45 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:35:45.846455    3843 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 08 00:35:45 default-k8s-diff-port-039263 kubelet[3843]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 08 00:35:45 default-k8s-diff-port-039263 kubelet[3843]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 08 00:35:45 default-k8s-diff-port-039263 kubelet[3843]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 08 00:35:47 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:35:47.693271    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j6t7g" podUID="5c0e827c-8281-4b51-b0c7-d43d0aa22e29"
	Nov 08 00:35:59 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:35:59.690855    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j6t7g" podUID="5c0e827c-8281-4b51-b0c7-d43d0aa22e29"
	Nov 08 00:36:13 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:36:13.692508    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j6t7g" podUID="5c0e827c-8281-4b51-b0c7-d43d0aa22e29"
	Nov 08 00:36:26 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:36:26.690817    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j6t7g" podUID="5c0e827c-8281-4b51-b0c7-d43d0aa22e29"
	Nov 08 00:36:41 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:36:41.694787    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j6t7g" podUID="5c0e827c-8281-4b51-b0c7-d43d0aa22e29"
	Nov 08 00:36:45 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:36:45.844780    3843 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 08 00:36:45 default-k8s-diff-port-039263 kubelet[3843]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 08 00:36:45 default-k8s-diff-port-039263 kubelet[3843]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 08 00:36:45 default-k8s-diff-port-039263 kubelet[3843]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 08 00:36:54 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:36:54.691658    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j6t7g" podUID="5c0e827c-8281-4b51-b0c7-d43d0aa22e29"
	Nov 08 00:37:07 default-k8s-diff-port-039263 kubelet[3843]: E1108 00:37:07.690777    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j6t7g" podUID="5c0e827c-8281-4b51-b0c7-d43d0aa22e29"
	
	* 
	* ==> storage-provisioner [3baa241fce7c43bab30bd0b77cd3079988292b3e06d253102ef620bdef914922] <==
	* I1108 00:19:03.603266       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 00:19:03.623479       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 00:19:03.623656       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1108 00:19:03.640018       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 00:19:03.640175       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-039263_ce9c89a2-842e-4265-aad4-e729b6e29abf!
	I1108 00:19:03.641282       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f8aca3c7-5434-4066-adcb-dd1d0fd2b186", APIVersion:"v1", ResourceVersion:"465", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-039263_ce9c89a2-842e-4265-aad4-e729b6e29abf became leader
	I1108 00:19:03.740836       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-039263_ce9c89a2-842e-4265-aad4-e729b6e29abf!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-039263 -n default-k8s-diff-port-039263
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-039263 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-j6t7g
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-039263 describe pod metrics-server-57f55c9bc5-j6t7g
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-039263 describe pod metrics-server-57f55c9bc5-j6t7g: exit status 1 (65.28715ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-j6t7g" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-039263 describe pod metrics-server-57f55c9bc5-j6t7g: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.47s)
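Context for the failure above: the kubelet log earlier in this section shows the metrics-server pod stuck in ImagePullBackOff pulling fake.domain/registry.k8s.io/echoserver:1.4, the deliberately unreachable registry configured by the test (see the `addons enable metrics-server ... --registries=MetricsServer=fake.domain` entries in the Audit table below), so the pod can never reach Running before the wait times out; the `kubectl describe` returning NotFound suggests the pod had already been deleted or replaced between the listing and the describe. The wait that times out is, in essence, a poll of pod phases by label. The following Go sketch approximates that kind of check; the namespace, selector, and polling interval are illustrative assumptions, not the harness's actual code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podsRunning reports whether every pod matching the selector is in phase
// Running, shelling out to kubectl much as the harness does.
func podsRunning(kubeContext, namespace, selector string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"get", "pods", "-n", namespace, "-l", selector,
		"-o", "jsonpath={.items[*].status.phase}").Output()
	if err != nil {
		return false, err
	}
	phases := strings.Fields(string(out))
	if len(phases) == 0 {
		return false, nil // nothing scheduled yet
	}
	for _, phase := range phases {
		if phase != "Running" {
			return false, nil // e.g. Pending while stuck in ImagePullBackOff
		}
	}
	return true, nil
}

func main() {
	deadline := time.Now().Add(9 * time.Minute) // same budget the test uses
	for time.Now().Before(deadline) {
		if ok, err := podsRunning("default-k8s-diff-port-039263",
			"kube-system", "k8s-app=metrics-server"); err == nil && ok {
			fmt.Println("pods are Running")
			return
		}
		time.Sleep(10 * time.Second) // assumed polling interval
	}
	fmt.Println("timed out: pods never reached Running")
}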

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (130.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1108 00:30:16.920900   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
E1108 00:30:38.956866   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
E1108 00:30:42.433987   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-590541 -n old-k8s-version-590541
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-11-08 00:32:16.523396147 +0000 UTC m=+5470.686705023
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-590541 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-590541 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.304µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-590541 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
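One detail worth noting above: the `kubectl describe deploy/dashboard-metrics-scraper` call fails after only 1.304µs with `context deadline exceeded`. That is not kubectl being fast; the test's shared context had already exhausted its 9m budget, and an expired Go context reports its error immediately, before any external command does real work. A minimal sketch of that behavior (illustrative, not the harness's code):

package main

import (
	"context"
	"fmt"
	"time"
)

func main() {
	// Stand-in for the test's exhausted 9m budget: a deadline already in
	// the past, so the context is born expired.
	ctx, cancel := context.WithDeadline(context.Background(),
		time.Now().Add(-time.Second))
	defer cancel()

	start := time.Now()
	err := ctx.Err() // context.DeadlineExceeded, with no work performed
	fmt.Printf("err=%v after %v\n", err, time.Since(start)) // microseconds
}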
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-590541 -n old-k8s-version-590541
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-590541 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-590541 logs -n 25: (1.517001681s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p kubernetes-upgrade-161055                           | kubernetes-upgrade-161055    | jenkins | v1.32.0 | 08 Nov 23 00:04 UTC | 08 Nov 23 00:04 UTC |
	| start   | -p no-preload-320390                                   | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:04 UTC | 08 Nov 23 00:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-484343                              | cert-expiration-484343       | jenkins | v1.32.0 | 08 Nov 23 00:04 UTC | 08 Nov 23 00:05 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-590541        | old-k8s-version-590541       | jenkins | v1.32.0 | 08 Nov 23 00:05 UTC | 08 Nov 23 00:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-590541                              | old-k8s-version-590541       | jenkins | v1.32.0 | 08 Nov 23 00:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-484343                              | cert-expiration-484343       | jenkins | v1.32.0 | 08 Nov 23 00:05 UTC | 08 Nov 23 00:05 UTC |
	| start   | -p embed-certs-253253                                  | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:05 UTC | 08 Nov 23 00:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-320390             | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:06 UTC | 08 Nov 23 00:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-320390                                   | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-253253            | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:06 UTC | 08 Nov 23 00:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-253253                                  | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-688874                              | stopped-upgrade-688874       | jenkins | v1.32.0 | 08 Nov 23 00:06 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-688874                              | stopped-upgrade-688874       | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC | 08 Nov 23 00:07 UTC |
	| delete  | -p                                                     | disable-driver-mounts-560216 | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC | 08 Nov 23 00:07 UTC |
	|         | disable-driver-mounts-560216                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC | 08 Nov 23 00:09 UTC |
	|         | default-k8s-diff-port-039263                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-590541             | old-k8s-version-590541       | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-590541                              | old-k8s-version-590541       | jenkins | v1.32.0 | 08 Nov 23 00:07 UTC | 08 Nov 23 00:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-320390                  | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-253253                 | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-320390                                   | no-preload-320390            | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC | 08 Nov 23 00:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-253253                                  | embed-certs-253253           | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC | 08 Nov 23 00:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-039263  | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC | 08 Nov 23 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:09 UTC |                     |
	|         | default-k8s-diff-port-039263                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-039263       | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-039263 | jenkins | v1.32.0 | 08 Nov 23 00:12 UTC | 08 Nov 23 00:19 UTC |
	|         | default-k8s-diff-port-039263                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/08 00:12:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 00:12:00.921478   51228 out.go:296] Setting OutFile to fd 1 ...
	I1108 00:12:00.921584   51228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:12:00.921592   51228 out.go:309] Setting ErrFile to fd 2...
	I1108 00:12:00.921597   51228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:12:00.921752   51228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
	I1108 00:12:00.922282   51228 out.go:303] Setting JSON to false
	I1108 00:12:00.923151   51228 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6870,"bootTime":1699395451,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 00:12:00.923210   51228 start.go:138] virtualization: kvm guest
	I1108 00:12:00.925322   51228 out.go:177] * [default-k8s-diff-port-039263] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1108 00:12:00.926718   51228 out.go:177]   - MINIKUBE_LOCATION=17585
	I1108 00:12:00.928030   51228 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 00:12:00.926756   51228 notify.go:220] Checking for updates...
	I1108 00:12:00.930659   51228 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:12:00.932049   51228 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	I1108 00:12:00.933341   51228 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 00:12:00.934394   51228 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 00:12:00.936334   51228 config.go:182] Loaded profile config "default-k8s-diff-port-039263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:12:00.936806   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:12:00.936857   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:12:00.950893   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36119
	I1108 00:12:00.951284   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:12:00.951775   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:12:00.951796   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:12:00.952131   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:12:00.952308   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:12:00.952537   51228 driver.go:378] Setting default libvirt URI to qemu:///system
	I1108 00:12:00.952850   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:12:00.952894   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:12:00.966402   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44715
	I1108 00:12:00.966726   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:12:00.967218   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:12:00.967238   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:12:00.967525   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:12:00.967705   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:12:01.002079   51228 out.go:177] * Using the kvm2 driver based on existing profile
	I1108 00:12:01.003352   51228 start.go:298] selected driver: kvm2
	I1108 00:12:01.003362   51228 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-039263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-039263 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.116 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:12:01.003471   51228 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 00:12:01.004117   51228 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:12:01.004197   51228 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17585-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1108 00:12:01.018635   51228 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1108 00:12:01.018987   51228 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 00:12:01.019047   51228 cni.go:84] Creating CNI manager for ""
	I1108 00:12:01.019060   51228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:12:01.019072   51228 start_flags.go:323] config:
	{Name:default-k8s-diff-port-039263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-039263 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.116 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:12:01.019251   51228 iso.go:125] acquiring lock: {Name:mk02d02b2a7a45dbdd1b46a32fb0724673cb4d8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 00:12:01.021306   51228 out.go:177] * Starting control plane node default-k8s-diff-port-039263 in cluster default-k8s-diff-port-039263
	I1108 00:12:00.865093   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:03.937104   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:01.022723   51228 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1108 00:12:01.022765   51228 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1108 00:12:01.022777   51228 cache.go:56] Caching tarball of preloaded images
	I1108 00:12:01.022864   51228 preload.go:174] Found /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 00:12:01.022875   51228 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1108 00:12:01.022984   51228 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/config.json ...
	I1108 00:12:01.023164   51228 start.go:365] acquiring machines lock for default-k8s-diff-port-039263: {Name:mkf032f30be570950285b6e092e75fb29cc3d166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1108 00:12:10.017091   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:13.089091   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:19.169065   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:22.241084   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:28.321050   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:31.393060   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:37.473056   50022 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.49:22: connect: no route to host
	I1108 00:12:40.475708   50505 start.go:369] acquired machines lock for "no-preload-320390" in 3m26.103068871s
	I1108 00:12:40.475773   50505 start.go:96] Skipping create...Using existing machine configuration
	I1108 00:12:40.475781   50505 fix.go:54] fixHost starting: 
	I1108 00:12:40.476087   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:12:40.476116   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:12:40.490309   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45419
	I1108 00:12:40.490708   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:12:40.491196   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:12:40.491217   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:12:40.491530   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:12:40.491718   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:40.491870   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetState
	I1108 00:12:40.493597   50505 fix.go:102] recreateIfNeeded on no-preload-320390: state=Stopped err=<nil>
	I1108 00:12:40.493628   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	W1108 00:12:40.493762   50505 fix.go:128] unexpected machine state, will restart: <nil>
	I1108 00:12:40.495670   50505 out.go:177] * Restarting existing kvm2 VM for "no-preload-320390" ...
	I1108 00:12:40.496930   50505 main.go:141] libmachine: (no-preload-320390) Calling .Start
	I1108 00:12:40.497098   50505 main.go:141] libmachine: (no-preload-320390) Ensuring networks are active...
	I1108 00:12:40.497753   50505 main.go:141] libmachine: (no-preload-320390) Ensuring network default is active
	I1108 00:12:40.498094   50505 main.go:141] libmachine: (no-preload-320390) Ensuring network mk-no-preload-320390 is active
	I1108 00:12:40.498442   50505 main.go:141] libmachine: (no-preload-320390) Getting domain xml...
	I1108 00:12:40.499199   50505 main.go:141] libmachine: (no-preload-320390) Creating domain...
	I1108 00:12:41.718179   50505 main.go:141] libmachine: (no-preload-320390) Waiting to get IP...
	I1108 00:12:41.719024   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:41.719423   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:41.719497   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:41.719407   51373 retry.go:31] will retry after 204.819851ms: waiting for machine to come up
	I1108 00:12:41.925924   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:41.926414   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:41.926445   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:41.926361   51373 retry.go:31] will retry after 237.59613ms: waiting for machine to come up
	I1108 00:12:42.165848   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:42.166251   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:42.166282   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:42.166195   51373 retry.go:31] will retry after 306.914093ms: waiting for machine to come up
	I1108 00:12:42.474651   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:42.475026   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:42.475057   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:42.474981   51373 retry.go:31] will retry after 490.427385ms: waiting for machine to come up
	I1108 00:12:42.967292   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:42.967709   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:42.967733   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:42.967661   51373 retry.go:31] will retry after 684.227655ms: waiting for machine to come up
	I1108 00:12:43.653384   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:43.653823   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:43.653847   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:43.653774   51373 retry.go:31] will retry after 640.101868ms: waiting for machine to come up
	I1108 00:12:40.473798   50022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 00:12:40.473838   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:12:40.475605   50022 machine.go:91] provisioned docker machine in 4m37.566672036s
	I1108 00:12:40.475639   50022 fix.go:56] fixHost completed within 4m37.589859084s
	I1108 00:12:40.475644   50022 start.go:83] releasing machines lock for "old-k8s-version-590541", held for 4m37.589890946s
	W1108 00:12:40.475670   50022 start.go:691] error starting host: provision: host is not running
	W1108 00:12:40.475777   50022 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1108 00:12:40.475788   50022 start.go:706] Will try again in 5 seconds ...
	I1108 00:12:44.295060   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:44.295559   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:44.295610   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:44.295506   51373 retry.go:31] will retry after 797.709386ms: waiting for machine to come up
	I1108 00:12:45.095135   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:45.095552   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:45.095575   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:45.095476   51373 retry.go:31] will retry after 1.052157242s: waiting for machine to come up
	I1108 00:12:46.149040   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:46.149393   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:46.149426   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:46.149336   51373 retry.go:31] will retry after 1.246701556s: waiting for machine to come up
	I1108 00:12:47.397579   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:47.397942   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:47.397981   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:47.397900   51373 retry.go:31] will retry after 1.742754262s: waiting for machine to come up
	I1108 00:12:49.142995   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:49.143390   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:49.143419   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:49.143349   51373 retry.go:31] will retry after 2.412997156s: waiting for machine to come up
	I1108 00:12:45.476072   50022 start.go:365] acquiring machines lock for old-k8s-version-590541: {Name:mkf032f30be570950285b6e092e75fb29cc3d166 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1108 00:12:51.558471   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:51.558857   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:51.558880   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:51.558809   51373 retry.go:31] will retry after 3.169873944s: waiting for machine to come up
	I1108 00:12:54.732010   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:54.732320   50505 main.go:141] libmachine: (no-preload-320390) DBG | unable to find current IP address of domain no-preload-320390 in network mk-no-preload-320390
	I1108 00:12:54.732340   50505 main.go:141] libmachine: (no-preload-320390) DBG | I1108 00:12:54.732292   51373 retry.go:31] will retry after 3.452837487s: waiting for machine to come up
	I1108 00:12:58.188516   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.188983   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has current primary IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.189014   50505 main.go:141] libmachine: (no-preload-320390) Found IP for machine: 192.168.61.176
	I1108 00:12:58.189036   50505 main.go:141] libmachine: (no-preload-320390) Reserving static IP address...
	I1108 00:12:58.189332   50505 main.go:141] libmachine: (no-preload-320390) Reserved static IP address: 192.168.61.176
	I1108 00:12:58.189364   50505 main.go:141] libmachine: (no-preload-320390) Waiting for SSH to be available...
	I1108 00:12:58.189388   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "no-preload-320390", mac: "52:54:00:0f:d8:91", ip: "192.168.61.176"} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.189415   50505 main.go:141] libmachine: (no-preload-320390) DBG | skip adding static IP to network mk-no-preload-320390 - found existing host DHCP lease matching {name: "no-preload-320390", mac: "52:54:00:0f:d8:91", ip: "192.168.61.176"}
	I1108 00:12:58.189432   50505 main.go:141] libmachine: (no-preload-320390) DBG | Getting to WaitForSSH function...
	I1108 00:12:58.191264   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.191565   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.191598   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.191730   50505 main.go:141] libmachine: (no-preload-320390) DBG | Using SSH client type: external
	I1108 00:12:58.191760   50505 main.go:141] libmachine: (no-preload-320390) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa (-rw-------)
	I1108 00:12:58.191794   50505 main.go:141] libmachine: (no-preload-320390) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.176 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1108 00:12:58.191808   50505 main.go:141] libmachine: (no-preload-320390) DBG | About to run SSH command:
	I1108 00:12:58.191819   50505 main.go:141] libmachine: (no-preload-320390) DBG | exit 0
	I1108 00:12:58.284621   50505 main.go:141] libmachine: (no-preload-320390) DBG | SSH cmd err, output: <nil>: 
	I1108 00:12:58.284983   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetConfigRaw
	I1108 00:12:58.285600   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetIP
	I1108 00:12:58.287966   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.288289   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.288325   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.288532   50505 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/config.json ...
	I1108 00:12:58.288712   50505 machine.go:88] provisioning docker machine ...
	I1108 00:12:58.288732   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:58.288917   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetMachineName
	I1108 00:12:58.289074   50505 buildroot.go:166] provisioning hostname "no-preload-320390"
	I1108 00:12:58.289097   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetMachineName
	I1108 00:12:58.289217   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:58.291053   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.291329   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.291358   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.291460   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:58.291613   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.291749   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.291849   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:58.292009   50505 main.go:141] libmachine: Using SSH client type: native
	I1108 00:12:58.292394   50505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1108 00:12:58.292419   50505 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-320390 && echo "no-preload-320390" | sudo tee /etc/hostname
	I1108 00:12:58.433310   50505 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-320390
	
	I1108 00:12:58.433333   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:58.435959   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.436351   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.436383   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.436531   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:58.436710   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.436853   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.436959   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:58.437088   50505 main.go:141] libmachine: Using SSH client type: native
	I1108 00:12:58.437607   50505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1108 00:12:58.437633   50505 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-320390' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-320390/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-320390' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 00:12:58.578473   50505 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 00:12:58.578506   50505 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1108 00:12:58.578568   50505 buildroot.go:174] setting up certificates
	I1108 00:12:58.578582   50505 provision.go:83] configureAuth start
	I1108 00:12:58.578600   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetMachineName
	I1108 00:12:58.578889   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetIP
	I1108 00:12:58.581534   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.581857   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.581881   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.581948   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:58.583777   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.584002   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.584023   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.584121   50505 provision.go:138] copyHostCerts
	I1108 00:12:58.584172   50505 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1108 00:12:58.584184   50505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1108 00:12:58.584247   50505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1108 00:12:58.584327   50505 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1108 00:12:58.584337   50505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1108 00:12:58.584359   50505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1108 00:12:58.584407   50505 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1108 00:12:58.584415   50505 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1108 00:12:58.584434   50505 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1108 00:12:58.584473   50505 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.no-preload-320390 san=[192.168.61.176 192.168.61.176 localhost 127.0.0.1 minikube no-preload-320390]
	I1108 00:12:58.785035   50505 provision.go:172] copyRemoteCerts
	I1108 00:12:58.785095   50505 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 00:12:58.785127   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:58.787683   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.788001   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.788037   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.788194   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:58.788363   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.788534   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:58.788678   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:12:58.881791   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 00:12:58.905314   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1108 00:12:58.928183   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 00:12:58.951053   50505 provision.go:86] duration metric: configureAuth took 372.456375ms
	I1108 00:12:58.951079   50505 buildroot.go:189] setting minikube options for container-runtime
	I1108 00:12:58.951288   50505 config.go:182] Loaded profile config "no-preload-320390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:12:58.951368   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:58.953851   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.954158   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:58.954182   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:58.954309   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:58.954504   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.954689   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:58.954819   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:58.954964   50505 main.go:141] libmachine: Using SSH client type: native
	I1108 00:12:58.955269   50505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1108 00:12:58.955283   50505 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 00:12:59.265311   50505 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 00:12:59.265342   50505 machine.go:91] provisioned docker machine in 976.618103ms
	I1108 00:12:59.265353   50505 start.go:300] post-start starting for "no-preload-320390" (driver="kvm2")
	I1108 00:12:59.265362   50505 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 00:12:59.265377   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:59.265683   50505 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 00:12:59.265721   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:59.533994   50613 start.go:369] acquired machines lock for "embed-certs-253253" in 3m37.489465904s
	I1108 00:12:59.534047   50613 start.go:96] Skipping create...Using existing machine configuration
	I1108 00:12:59.534093   50613 fix.go:54] fixHost starting: 
	I1108 00:12:59.534485   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:12:59.534531   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:12:59.553784   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34533
	I1108 00:12:59.554193   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:12:59.554676   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:12:59.554702   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:12:59.555006   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:12:59.555188   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:12:59.555320   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetState
	I1108 00:12:59.556783   50613 fix.go:102] recreateIfNeeded on embed-certs-253253: state=Stopped err=<nil>
	I1108 00:12:59.556804   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	W1108 00:12:59.556989   50613 fix.go:128] unexpected machine state, will restart: <nil>
	I1108 00:12:59.558834   50613 out.go:177] * Restarting existing kvm2 VM for "embed-certs-253253" ...
	I1108 00:12:59.268378   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.268792   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:59.268836   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.268991   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:59.269175   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:59.269337   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:59.269480   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:12:59.363687   50505 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 00:12:59.368009   50505 info.go:137] Remote host: Buildroot 2021.02.12
	I1108 00:12:59.368028   50505 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1108 00:12:59.368087   50505 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1108 00:12:59.368176   50505 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1108 00:12:59.368287   50505 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 00:12:59.377685   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:12:59.399143   50505 start.go:303] post-start completed in 133.780055ms
	I1108 00:12:59.399161   50505 fix.go:56] fixHost completed within 18.923380073s
	I1108 00:12:59.399178   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:59.401608   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.401977   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:59.402007   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.402127   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:59.402315   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:59.402471   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:59.402650   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:59.402824   50505 main.go:141] libmachine: Using SSH client type: native
	I1108 00:12:59.403150   50505 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.61.176 22 <nil> <nil>}
	I1108 00:12:59.403162   50505 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1108 00:12:59.533831   50505 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699402379.481958632
	
	I1108 00:12:59.533852   50505 fix.go:206] guest clock: 1699402379.481958632
	I1108 00:12:59.533859   50505 fix.go:219] Guest: 2023-11-08 00:12:59.481958632 +0000 UTC Remote: 2023-11-08 00:12:59.399164235 +0000 UTC m=+225.183083525 (delta=82.794397ms)
	I1108 00:12:59.533876   50505 fix.go:190] guest clock delta is within tolerance: 82.794397ms
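The clock check above reads the guest clock over SSH (date +%s.%N), compares it to the host wall clock at the moment of the read, and only resets the guest clock when the delta exceeds a tolerance. A minimal sketch of the comparison, using the timestamps from the log; the one-second bound is an illustrative assumption, not minikube's actual tolerance:

	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance reports whether the guest/host clock delta is small
	// enough to skip resetting the guest clock.
	func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tol
	}

	func main() {
		guest := time.Unix(1699402379, 481958632) // value read from the VM
		host := time.Unix(1699402379, 399164235)  // host time of the read
		delta, ok := withinTolerance(guest, host, time.Second)
		fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // delta=82.794397ms within tolerance=true
	}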
	I1108 00:12:59.533880   50505 start.go:83] releasing machines lock for "no-preload-320390", held for 19.058127295s
	I1108 00:12:59.533902   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:59.534171   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetIP
	I1108 00:12:59.537173   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.537616   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:59.537665   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.537736   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:59.538230   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:59.538431   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:12:59.538517   50505 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 00:12:59.538613   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:59.538659   50505 ssh_runner.go:195] Run: cat /version.json
	I1108 00:12:59.538683   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:12:59.541051   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.541283   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.541438   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:59.541463   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.541599   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:12:59.541608   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:59.541634   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:12:59.541775   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:59.541845   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:12:59.541939   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:59.541997   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:12:59.542078   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:12:59.542093   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:12:59.542265   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:12:59.637947   50505 ssh_runner.go:195] Run: systemctl --version
	I1108 00:12:59.660255   50505 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 00:12:59.809407   50505 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 00:12:59.816246   50505 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 00:12:59.816323   50505 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:12:59.831564   50505 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 00:12:59.831586   50505 start.go:472] detecting cgroup driver to use...
	I1108 00:12:59.831651   50505 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 00:12:59.847556   50505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 00:12:59.861077   50505 docker.go:203] disabling cri-docker service (if available) ...
	I1108 00:12:59.861143   50505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 00:12:59.876764   50505 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 00:12:59.890894   50505 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 00:13:00.001947   50505 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 00:13:00.121923   50505 docker.go:219] disabling docker service ...
	I1108 00:13:00.122000   50505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 00:13:00.135525   50505 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 00:13:00.148130   50505 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 00:13:00.259318   50505 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 00:13:00.368101   50505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 00:13:00.381138   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 00:13:00.398173   50505 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1108 00:13:00.398245   50505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:00.407655   50505 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 00:13:00.407699   50505 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:00.416919   50505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:00.425767   50505 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:00.434447   50505 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
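The crio.go steps above rewrite /etc/crio/crio.conf.d/02-crio.conf in place with sed: pin pause_image, force cgroup_manager = "cgroupfs", delete any conmon_cgroup line, and re-add conmon_cgroup = "pod" right after cgroup_manager. A sketch of the same rewrite over an in-memory string (the real flow edits the file on the VM over SSH):

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// rewriteCrioConf applies the same edits as the logged sed commands.
	func rewriteCrioConf(conf string) string {
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		// drop existing conmon_cgroup lines, then re-add after cgroup_manager
		conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
		conf = strings.Replace(conf,
			`cgroup_manager = "cgroupfs"`,
			"cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"", 1)
		return conf
	}

	func main() {
		in := "pause_image = \"k8s.gcr.io/pause:3.6\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
		fmt.Print(rewriteCrioConf(in))
	}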
	I1108 00:13:00.443679   50505 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 00:13:00.451581   50505 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1108 00:13:00.451619   50505 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1108 00:13:00.464498   50505 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
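The netfilter check above is deliberately tolerant: probing net.bridge.bridge-nf-call-iptables fails with status 255 while br_netfilter is not loaded, so the code treats that as "might be okay", loads the module, and then enables IPv4 forwarding. A minimal sketch of that fallback, with error handling simplified; the commands match the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureBridgeNetfilter mirrors the logged fallback: if the sysctl key
	// is missing, load br_netfilter, then enable IPv4 forwarding.
	func ensureBridgeNetfilter() error {
		if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			// key absent until the module loads; expected, not fatal
			if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
				return fmt.Errorf("modprobe br_netfilter: %w", err)
			}
		}
		return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
	}

	func main() {
		if err := ensureBridgeNetfilter(); err != nil {
			fmt.Println("netfilter setup failed:", err)
		}
	}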
	I1108 00:13:00.474332   50505 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 00:13:00.599521   50505 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 00:13:00.770248   50505 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 00:13:00.770341   50505 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 00:13:00.775707   50505 start.go:540] Will wait 60s for crictl version
	I1108 00:13:00.775768   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:00.779578   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 00:13:00.821230   50505 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1108 00:13:00.821320   50505 ssh_runner.go:195] Run: crio --version
	I1108 00:13:00.872851   50505 ssh_runner.go:195] Run: crio --version
	I1108 00:13:00.920420   50505 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1108 00:12:59.560111   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Start
	I1108 00:12:59.560287   50613 main.go:141] libmachine: (embed-certs-253253) Ensuring networks are active...
	I1108 00:12:59.561030   50613 main.go:141] libmachine: (embed-certs-253253) Ensuring network default is active
	I1108 00:12:59.561390   50613 main.go:141] libmachine: (embed-certs-253253) Ensuring network mk-embed-certs-253253 is active
	I1108 00:12:59.561717   50613 main.go:141] libmachine: (embed-certs-253253) Getting domain xml...
	I1108 00:12:59.562287   50613 main.go:141] libmachine: (embed-certs-253253) Creating domain...
	I1108 00:13:00.806061   50613 main.go:141] libmachine: (embed-certs-253253) Waiting to get IP...
	I1108 00:13:00.806862   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:00.807268   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:00.807340   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:00.807226   51493 retry.go:31] will retry after 261.179966ms: waiting for machine to come up
	I1108 00:13:01.069535   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:01.070048   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:01.070078   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:01.069997   51493 retry.go:31] will retry after 302.795302ms: waiting for machine to come up
	I1108 00:13:01.374567   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:01.375094   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:01.375119   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:01.375043   51493 retry.go:31] will retry after 303.804523ms: waiting for machine to come up
	I1108 00:13:01.680374   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:01.680698   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:01.680726   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:01.680660   51493 retry.go:31] will retry after 446.122126ms: waiting for machine to come up
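The retry.go lines above poll for the VM's DHCP lease with jittered, roughly growing delays (261ms, 302ms, 303ms, 446ms, and later whole seconds) instead of a fixed interval. A minimal sketch of that pattern; the growth factor and cap are illustrative assumptions, not minikube's exact parameters:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff polls fn with jittered, growing delays, in the
	// spirit of the "will retry after ..." log lines.
	func retryWithBackoff(fn func() error, attempts int) error {
		delay := 250 * time.Millisecond
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", jittered, err)
			time.Sleep(jittered)
			if delay < 4*time.Second {
				delay = delay * 3 / 2 // grow gradually toward the cap
			}
		}
		return err
	}

	func main() {
		calls := 0
		err := retryWithBackoff(func() error {
			calls++
			if calls < 4 {
				return errors.New("waiting for machine to come up")
			}
			return nil
		}, 10)
		fmt.Println("done:", err)
	}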
	I1108 00:13:00.921979   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetIP
	I1108 00:13:00.924760   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:13:00.925121   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:13:00.925148   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:13:00.925370   50505 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1108 00:13:00.929750   50505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
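The bash one-liner above refreshes the host.minikube.internal mapping by filtering out any stale line, appending the new entry, writing to a temp file, and cp-ing it back over /etc/hosts. An in-memory Go sketch of the same filter-and-append transformation (upsertHost is a hypothetical helper for illustration):

	package main

	import (
		"fmt"
		"strings"
	)

	// upsertHost drops any line already mapping the given name and
	// appends a fresh "ip\tname" entry, mirroring the grep -v / echo
	// pipeline in the log.
	func upsertHost(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		hosts := "127.0.0.1\tlocalhost\n192.168.61.1\thost.minikube.internal\n"
		fmt.Print(upsertHost(hosts, "192.168.61.1", "host.minikube.internal"))
	}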
	I1108 00:13:00.941338   50505 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1108 00:13:00.941372   50505 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:13:00.979343   50505 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1108 00:13:00.979370   50505 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.3 registry.k8s.io/kube-controller-manager:v1.28.3 registry.k8s.io/kube-scheduler:v1.28.3 registry.k8s.io/kube-proxy:v1.28.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1108 00:13:00.979489   50505 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1108 00:13:00.979539   50505 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1108 00:13:00.979465   50505 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:13:00.979636   50505 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.3
	I1108 00:13:00.979477   50505 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1108 00:13:00.979465   50505 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.3
	I1108 00:13:00.979515   50505 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1108 00:13:00.979516   50505 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.3
	I1108 00:13:00.980609   50505 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1108 00:13:00.980645   50505 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1108 00:13:00.980677   50505 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1108 00:13:00.980704   50505 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1108 00:13:00.980645   50505 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.3
	I1108 00:13:00.980733   50505 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:13:00.980949   50505 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.3
	I1108 00:13:00.980994   50505 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.3
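The "daemon lookup" errors above are expected on a fresh VM: image.go first asks the local container daemon for each image, and a "No such image" miss simply routes the loader to the on-disk cache in ~/.minikube/cache. A hedged sketch of that lookup-then-fallback using the docker CLI; minikube performs this lookup through a Go registry library rather than by shelling out:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// inDaemon reports whether the local daemon already has the image.
	func inDaemon(image string) bool {
		return exec.Command("docker", "image", "inspect", image).Run() == nil
	}

	func main() {
		for _, img := range []string{
			"registry.k8s.io/kube-apiserver:v1.28.3",
			"gcr.io/k8s-minikube/storage-provisioner:v5",
		} {
			if inDaemon(img) {
				fmt.Println("daemon hit:", img)
			} else {
				// matches the "daemon lookup ... No such image" log lines
				fmt.Println("daemon miss, falling back to cache:", img)
			}
		}
	}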
	I1108 00:13:01.126154   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.3
	I1108 00:13:01.131334   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.3
	I1108 00:13:01.141929   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.3
	I1108 00:13:01.150051   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.3
	I1108 00:13:01.178472   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I1108 00:13:01.198519   50505 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.3" does not exist at hash "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076" in container runtime
	I1108 00:13:01.198569   50505 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.3
	I1108 00:13:01.198628   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:01.214419   50505 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.3" does not exist at hash "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3" in container runtime
	I1108 00:13:01.214470   50505 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1108 00:13:01.214527   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:01.249270   50505 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.3" does not exist at hash "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4" in container runtime
	I1108 00:13:01.249316   50505 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.3
	I1108 00:13:01.249321   50505 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.3" needs transfer: "registry.k8s.io/kube-proxy:v1.28.3" does not exist at hash "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf" in container runtime
	I1108 00:13:01.249354   50505 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.3
	I1108 00:13:01.249363   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:01.249398   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:01.257971   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1108 00:13:01.268557   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I1108 00:13:01.279207   50505 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I1108 00:13:01.279254   50505 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I1108 00:13:01.279255   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.3
	I1108 00:13:01.279295   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:01.279304   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.3
	I1108 00:13:01.279365   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.3
	I1108 00:13:01.279492   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.3
	I1108 00:13:01.477649   50505 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I1108 00:13:01.477691   50505 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I1108 00:13:01.477740   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:01.477782   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3
	I1108 00:13:01.477888   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1108 00:13:01.477888   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3
	I1108 00:13:01.477963   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3
	I1108 00:13:01.478038   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3
	I1108 00:13:01.478005   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I1108 00:13:01.478079   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1108 00:13:01.478116   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.3
	I1108 00:13:01.478121   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1108 00:13:01.489810   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I1108 00:13:01.490983   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.3 (exists)
	I1108 00:13:01.491011   50505 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1108 00:13:01.491049   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3
	I1108 00:13:01.490984   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.3 (exists)
	I1108 00:13:01.556911   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1108 00:13:01.556996   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.3 (exists)
	I1108 00:13:01.557036   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1
	I1108 00:13:01.557048   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.3 (exists)
	I1108 00:13:01.576123   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1108 00:13:01.576251   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0
	I1108 00:13:02.001052   50505 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:13:02.127888   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:02.128302   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:02.128333   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:02.128247   51493 retry.go:31] will retry after 498.0349ms: waiting for machine to come up
	I1108 00:13:02.627872   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:02.628339   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:02.628373   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:02.628296   51493 retry.go:31] will retry after 852.947554ms: waiting for machine to come up
	I1108 00:13:03.483507   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:03.484074   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:03.484119   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:03.484024   51493 retry.go:31] will retry after 1.040831469s: waiting for machine to come up
	I1108 00:13:04.526186   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:04.526503   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:04.526535   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:04.526446   51493 retry.go:31] will retry after 960.701342ms: waiting for machine to come up
	I1108 00:13:05.489041   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:05.489473   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:05.489509   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:05.489456   51493 retry.go:31] will retry after 1.729813733s: waiting for machine to come up
	I1108 00:13:04.536381   50505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.3: (3.045307892s)
	I1108 00:13:04.536412   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 from cache
	I1108 00:13:04.536439   50505 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1108 00:13:04.536453   50505 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1: (2.979392017s)
	I1108 00:13:04.536485   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I1108 00:13:04.536491   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3
	I1108 00:13:04.536531   50505 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0: (2.960264305s)
	I1108 00:13:04.536549   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I1108 00:13:04.536590   50505 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.535505624s)
	I1108 00:13:04.536622   50505 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1108 00:13:04.536652   50505 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:13:04.536694   50505 ssh_runner.go:195] Run: which crictl
	I1108 00:13:07.220832   50505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.3: (2.68430655s)
	I1108 00:13:07.220863   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 from cache
	I1108 00:13:07.220898   50505 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1108 00:13:07.220902   50505 ssh_runner.go:235] Completed: which crictl: (2.684187653s)
	I1108 00:13:07.220982   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3
	I1108 00:13:07.221015   50505 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:13:08.593275   50505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.3: (1.372272111s)
	I1108 00:13:08.593311   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 from cache
	I1108 00:13:08.593326   50505 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.372286228s)
	I1108 00:13:08.593374   50505 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1108 00:13:08.593338   50505 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.3
	I1108 00:13:08.593474   50505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1108 00:13:08.593479   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3
	I1108 00:13:07.221541   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:07.221969   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:07.221998   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:07.221953   51493 retry.go:31] will retry after 1.97898588s: waiting for machine to come up
	I1108 00:13:09.202332   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:09.202803   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:09.202831   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:09.202756   51493 retry.go:31] will retry after 2.565503631s: waiting for machine to come up
	I1108 00:13:11.769857   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:11.770332   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:11.770354   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:11.770292   51493 retry.go:31] will retry after 3.236419831s: waiting for machine to come up
	I1108 00:13:10.382696   50505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.3: (1.789194848s)
	I1108 00:13:10.382726   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 from cache
	I1108 00:13:10.382747   50505 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.789249445s)
	I1108 00:13:10.382776   50505 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1108 00:13:10.382752   50505 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I1108 00:13:10.382828   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I1108 00:13:11.846184   50505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.463326325s)
	I1108 00:13:11.846222   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I1108 00:13:11.846254   50505 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I1108 00:13:11.846322   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I1108 00:13:15.008441   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:15.008899   50613 main.go:141] libmachine: (embed-certs-253253) DBG | unable to find current IP address of domain embed-certs-253253 in network mk-embed-certs-253253
	I1108 00:13:15.008936   50613 main.go:141] libmachine: (embed-certs-253253) DBG | I1108 00:13:15.008860   51493 retry.go:31] will retry after 3.079379099s: waiting for machine to come up
	I1108 00:13:19.138865   50505 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.292505697s)
	I1108 00:13:19.138899   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I1108 00:13:19.138926   50505 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1108 00:13:19.138987   50505 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1108 00:13:19.465800   51228 start.go:369] acquired machines lock for "default-k8s-diff-port-039263" in 1m18.442604828s
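start.go serializes VM provisioning behind a shared "machines" lock, so the 1m18s reported here is mostly time spent queued behind the no-preload-320390 and embed-certs-253253 profiles rather than work. A minimal sketch of timed lock acquisition; a plain sync.Mutex stands in for minikube's actual lock:

	package main

	import (
		"fmt"
		"sync"
		"time"
	)

	// acquire takes the shared machines lock and reports how long the
	// caller waited, mirroring the "acquired machines lock for X in D"
	// log lines. Returns the release function.
	func acquire(mu *sync.Mutex, profile string) func() {
		start := time.Now()
		mu.Lock()
		fmt.Printf("acquired machines lock for %q in %v\n", profile, time.Since(start))
		return mu.Unlock
	}

	func main() {
		var machines sync.Mutex
		release := acquire(&machines, "default-k8s-diff-port-039263")
		defer release()
		time.Sleep(10 * time.Millisecond) // provisioning work happens here
	}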
	I1108 00:13:19.465853   51228 start.go:96] Skipping create...Using existing machine configuration
	I1108 00:13:19.465863   51228 fix.go:54] fixHost starting: 
	I1108 00:13:19.466321   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:13:19.466373   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:13:19.485614   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32967
	I1108 00:13:19.486012   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:13:19.486457   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:13:19.486478   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:13:19.486839   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:13:19.487016   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:19.487158   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetState
	I1108 00:13:19.488697   51228 fix.go:102] recreateIfNeeded on default-k8s-diff-port-039263: state=Stopped err=<nil>
	I1108 00:13:19.488733   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	W1108 00:13:19.488889   51228 fix.go:128] unexpected machine state, will restart: <nil>
	I1108 00:13:19.490913   51228 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-039263" ...
	I1108 00:13:19.492333   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Start
	I1108 00:13:19.492481   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Ensuring networks are active...
	I1108 00:13:19.493162   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Ensuring network default is active
	I1108 00:13:19.493592   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Ensuring network mk-default-k8s-diff-port-039263 is active
	I1108 00:13:19.494016   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Getting domain xml...
	I1108 00:13:19.494668   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Creating domain...
	I1108 00:13:20.910918   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting to get IP...
	I1108 00:13:20.911948   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:20.912423   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:20.912517   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:20.912403   51635 retry.go:31] will retry after 265.914494ms: waiting for machine to come up
	I1108 00:13:18.092086   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.092516   50613 main.go:141] libmachine: (embed-certs-253253) Found IP for machine: 192.168.39.159
	I1108 00:13:18.092544   50613 main.go:141] libmachine: (embed-certs-253253) Reserving static IP address...
	I1108 00:13:18.092568   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has current primary IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.092947   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "embed-certs-253253", mac: "52:54:00:1a:6e:cb", ip: "192.168.39.159"} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.092980   50613 main.go:141] libmachine: (embed-certs-253253) DBG | skip adding static IP to network mk-embed-certs-253253 - found existing host DHCP lease matching {name: "embed-certs-253253", mac: "52:54:00:1a:6e:cb", ip: "192.168.39.159"}
	I1108 00:13:18.092999   50613 main.go:141] libmachine: (embed-certs-253253) Reserved static IP address: 192.168.39.159
	I1108 00:13:18.093019   50613 main.go:141] libmachine: (embed-certs-253253) Waiting for SSH to be available...
	I1108 00:13:18.093036   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Getting to WaitForSSH function...
	I1108 00:13:18.094941   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.095285   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.095311   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.095472   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Using SSH client type: external
	I1108 00:13:18.095487   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa (-rw-------)
	I1108 00:13:18.095519   50613 main.go:141] libmachine: (embed-certs-253253) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1108 00:13:18.095535   50613 main.go:141] libmachine: (embed-certs-253253) DBG | About to run SSH command:
	I1108 00:13:18.095545   50613 main.go:141] libmachine: (embed-certs-253253) DBG | exit 0
	I1108 00:13:18.184364   50613 main.go:141] libmachine: (embed-certs-253253) DBG | SSH cmd err, output: <nil>: 
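WaitForSSH, per the DBG lines above, probes the guest by running "exit 0" through an external ssh client with host-key checking disabled; the machine counts as reachable once that trivial command exits cleanly. A minimal sketch of the same probe loop; the host, key path, and retry interval are illustrative values:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForSSH re-runs "exit 0" over ssh until it succeeds, the probe
	// described in the log. Options mirror the logged ssh invocation.
	func waitForSSH(host, keyPath string, attempts int) error {
		var err error
		for i := 0; i < attempts; i++ {
			err = exec.Command("ssh",
				"-o", "StrictHostKeyChecking=no",
				"-o", "UserKnownHostsFile=/dev/null",
				"-o", "ConnectTimeout=10",
				"-i", keyPath, "docker@"+host, "exit 0").Run()
			if err == nil {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("ssh never came up: %w", err)
	}

	func main() {
		if err := waitForSSH("192.168.39.159", "/path/to/id_rsa", 30); err != nil {
			fmt.Println(err)
		}
	}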
	I1108 00:13:18.184700   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetConfigRaw
	I1108 00:13:18.264914   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetIP
	I1108 00:13:18.267404   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.267716   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.267752   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.267951   50613 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/config.json ...
	I1108 00:13:18.268153   50613 machine.go:88] provisioning docker machine ...
	I1108 00:13:18.268171   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:13:18.268382   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetMachineName
	I1108 00:13:18.268642   50613 buildroot.go:166] provisioning hostname "embed-certs-253253"
	I1108 00:13:18.268662   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetMachineName
	I1108 00:13:18.268783   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:18.270977   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.271275   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.271302   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.271485   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:18.271683   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.271873   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.272021   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:18.272185   50613 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:18.272549   50613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1108 00:13:18.272564   50613 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-253253 && echo "embed-certs-253253" | sudo tee /etc/hostname
	I1108 00:13:18.408618   50613 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-253253
	
	I1108 00:13:18.408655   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:18.411325   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.411629   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.411673   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.411793   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:18.412024   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.412204   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.412353   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:18.412513   50613 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:18.412864   50613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1108 00:13:18.412884   50613 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-253253' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-253253/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-253253' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 00:13:18.537585   50613 main.go:141] libmachine: SSH cmd err, output: <nil>: 
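
The /etc/hosts edit above is deliberately idempotent: rewrite an existing 127.0.1.1 entry if one exists, otherwise append one. A minimal Go sketch that assembles the same shell snippet (hostsFixupCmd is a hypothetical helper name, not minikube's actual function):

    package main

    import "fmt"

    // hostsFixupCmd builds a shell snippet that maps 127.0.1.1 to the given
    // hostname exactly once: it rewrites an existing 127.0.1.1 entry if
    // present, otherwise appends one. Mirrors the command logged above.
    func hostsFixupCmd(name string) string {
    	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, name)
    }

    func main() {
    	fmt.Println(hostsFixupCmd("embed-certs-253253"))
    }
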
	I1108 00:13:18.537611   50613 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1108 00:13:18.537628   50613 buildroot.go:174] setting up certificates
	I1108 00:13:18.537636   50613 provision.go:83] configureAuth start
	I1108 00:13:18.537644   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetMachineName
	I1108 00:13:18.537930   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetIP
	I1108 00:13:18.540544   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.540937   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.540966   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.541078   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:18.543184   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.543455   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.543486   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.543559   50613 provision.go:138] copyHostCerts
	I1108 00:13:18.543621   50613 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1108 00:13:18.543639   50613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1108 00:13:18.543692   50613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1108 00:13:18.543793   50613 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1108 00:13:18.543801   50613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1108 00:13:18.543823   50613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1108 00:13:18.543876   50613 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1108 00:13:18.543884   50613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1108 00:13:18.543900   50613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1108 00:13:18.543962   50613 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.embed-certs-253253 san=[192.168.39.159 192.168.39.159 localhost 127.0.0.1 minikube embed-certs-253253]
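
The san=[...] list above becomes the server certificate's subject alternative names. A self-contained Go sketch of the same idea, assuming a self-signed certificate for brevity (minikube actually signs the server cert with the CA key referenced in the log line):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-253253"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs matching the san=[...] list in the log line above.
    		IPAddresses: []net.IP{net.ParseIP("192.168.39.159"), net.ParseIP("127.0.0.1")},
    		DNSNames:    []string{"localhost", "minikube", "embed-certs-253253"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }
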
	I1108 00:13:18.707824   50613 provision.go:172] copyRemoteCerts
	I1108 00:13:18.707880   50613 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 00:13:18.707905   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:18.710820   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.711181   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.711208   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.711437   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:18.711642   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.711815   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:18.711973   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:13:18.803200   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 00:13:18.827267   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1108 00:13:18.850710   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 00:13:18.876752   50613 provision.go:86] duration metric: configureAuth took 339.103407ms
	I1108 00:13:18.876781   50613 buildroot.go:189] setting minikube options for container-runtime
	I1108 00:13:18.876987   50613 config.go:182] Loaded profile config "embed-certs-253253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:13:18.877075   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:18.879751   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.880121   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:18.880149   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:18.880331   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:18.880501   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.880649   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:18.880772   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:18.880929   50613 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:18.881240   50613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1108 00:13:18.881257   50613 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 00:13:19.199987   50613 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 00:13:19.200012   50613 machine.go:91] provisioned docker machine in 931.846262ms
	I1108 00:13:19.200023   50613 start.go:300] post-start starting for "embed-certs-253253" (driver="kvm2")
	I1108 00:13:19.200035   50613 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 00:13:19.200057   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:13:19.200377   50613 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 00:13:19.200409   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:19.203230   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.203610   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:19.203644   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.203771   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:19.203963   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:19.204118   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:19.204231   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:13:19.297991   50613 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 00:13:19.303630   50613 info.go:137] Remote host: Buildroot 2021.02.12
	I1108 00:13:19.303655   50613 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1108 00:13:19.303721   50613 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1108 00:13:19.303831   50613 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1108 00:13:19.303956   50613 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 00:13:19.315605   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:13:19.339647   50613 start.go:303] post-start completed in 139.611237ms
	I1108 00:13:19.339665   50613 fix.go:56] fixHost completed within 19.805611247s
	I1108 00:13:19.339687   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:19.342291   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.342623   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:19.342648   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.342838   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:19.343019   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:19.343147   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:19.343323   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:19.343483   50613 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:19.343856   50613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1108 00:13:19.343868   50613 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1108 00:13:19.465645   50613 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699402399.415738784
	
	I1108 00:13:19.465670   50613 fix.go:206] guest clock: 1699402399.415738784
	I1108 00:13:19.465681   50613 fix.go:219] Guest: 2023-11-08 00:13:19.415738784 +0000 UTC Remote: 2023-11-08 00:13:19.339668655 +0000 UTC m=+237.442917453 (delta=76.070129ms)
	I1108 00:13:19.465704   50613 fix.go:190] guest clock delta is within tolerance: 76.070129ms
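
The guest clock check above compares the VM's `date +%s.%N` output against the host's remote timestamp and skips resyncing when the delta is small. A sketch of that comparison; the 2s tolerance here is an assumption, not necessarily minikube's constant:

    package main

    import (
    	"fmt"
    	"time"
    )

    // withinClockTolerance reports whether the guest/host clock delta is
    // small enough to skip resyncing the guest clock.
    func withinClockTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tol
    }

    func main() {
    	guest := time.Unix(0, 1699402399415738784) // parsed from `date +%s.%N`
    	host := time.Unix(0, 1699402399339668655)
    	d, ok := withinClockTolerance(guest, host, 2*time.Second)
    	fmt.Printf("delta=%v within tolerance=%v\n", d, ok) // delta=76.070129ms, as logged
    }
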
	I1108 00:13:19.465710   50613 start.go:83] releasing machines lock for "embed-certs-253253", held for 19.931686858s
	I1108 00:13:19.465738   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:13:19.465996   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetIP
	I1108 00:13:19.468862   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.469185   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:19.469223   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.469365   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:13:19.469898   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:13:19.470091   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:13:19.470174   50613 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 00:13:19.470215   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:19.470300   50613 ssh_runner.go:195] Run: cat /version.json
	I1108 00:13:19.470321   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:13:19.473140   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.473285   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.473517   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:19.473562   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.473594   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:19.473612   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:19.473662   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:19.473777   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:13:19.473843   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:19.474004   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:19.474007   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:13:19.474153   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:13:19.474192   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:13:19.474344   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:13:19.565638   50613 ssh_runner.go:195] Run: systemctl --version
	I1108 00:13:19.591686   50613 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 00:13:19.747192   50613 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 00:13:19.755053   50613 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 00:13:19.755134   50613 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:13:19.774522   50613 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 00:13:19.774551   50613 start.go:472] detecting cgroup driver to use...
	I1108 00:13:19.774652   50613 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 00:13:19.795492   50613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 00:13:19.809888   50613 docker.go:203] disabling cri-docker service (if available) ...
	I1108 00:13:19.809958   50613 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 00:13:19.823108   50613 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 00:13:19.835588   50613 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 00:13:19.940017   50613 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 00:13:20.075405   50613 docker.go:219] disabling docker service ...
	I1108 00:13:20.075460   50613 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 00:13:20.090949   50613 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 00:13:20.103551   50613 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 00:13:20.226887   50613 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 00:13:20.352088   50613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 00:13:20.367626   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 00:13:20.388084   50613 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1108 00:13:20.388153   50613 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:20.398506   50613 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 00:13:20.398573   50613 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:20.408335   50613 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:20.417991   50613 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
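
The sed invocations above pin the pause image, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup = "pod" after it. The same edits expressed as a pure-Go sketch over the config text (minikube itself shells out to sed, as logged):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := `pause_image = "registry.k8s.io/pause:3.6"
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"`

    	// Replace the pause_image and cgroup_manager lines wholesale.
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	// Drop any existing conmon_cgroup line, then pin it to "pod".
    	conf = regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
    	conf = regexp.MustCompile(`(?m)^(\s*cgroup_manager = .*)$`).
    		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
    	fmt.Println(conf)
    }
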
	I1108 00:13:20.427599   50613 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 00:13:20.439537   50613 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 00:13:20.450914   50613 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1108 00:13:20.450972   50613 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1108 00:13:20.464456   50613 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
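
The sysctl failure above is expected when br_netfilter is not loaded yet; the fallback is to modprobe the module and re-check. A sketch of that fallback (assumes sudo and modprobe are available on the host):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ensureBridgeNetfilter mirrors the fallback logged above: if the sysctl
    // key is missing (module not loaded), load br_netfilter and retry.
    func ensureBridgeNetfilter() error {
    	check := func() error {
    		return exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run()
    	}
    	if err := check(); err == nil {
    		return nil
    	}
    	if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
    		return fmt.Errorf("modprobe br_netfilter: %w", err)
    	}
    	return check()
    }

    func main() {
    	if err := ensureBridgeNetfilter(); err != nil {
    		fmt.Println("bridge netfilter unavailable:", err)
    	}
    }
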
	I1108 00:13:20.475133   50613 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 00:13:20.586162   50613 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 00:13:20.799540   50613 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 00:13:20.799615   50613 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 00:13:20.808503   50613 start.go:540] Will wait 60s for crictl version
	I1108 00:13:20.808551   50613 ssh_runner.go:195] Run: which crictl
	I1108 00:13:20.812371   50613 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 00:13:20.853073   50613 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1108 00:13:20.853166   50613 ssh_runner.go:195] Run: crio --version
	I1108 00:13:20.904737   50613 ssh_runner.go:195] Run: crio --version
	I1108 00:13:20.958281   50613 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1108 00:13:20.959792   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetIP
	I1108 00:13:20.962399   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:20.962740   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:13:20.962775   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:13:20.963037   50613 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1108 00:13:20.967403   50613 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:13:20.980199   50613 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1108 00:13:20.980261   50613 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:13:21.024679   50613 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1108 00:13:21.024757   50613 ssh_runner.go:195] Run: which lz4
	I1108 00:13:21.028861   50613 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1108 00:13:21.032736   50613 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1108 00:13:21.032762   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1108 00:13:19.898602   50505 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1108 00:13:19.898655   50505 cache_images.go:123] Successfully loaded all cached images
	I1108 00:13:19.898663   50505 cache_images.go:92] LoadImages completed in 18.919280882s
	I1108 00:13:19.898742   50505 ssh_runner.go:195] Run: crio config
	I1108 00:13:19.970909   50505 cni.go:84] Creating CNI manager for ""
	I1108 00:13:19.970936   50505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:13:19.970958   50505 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1108 00:13:19.970986   50505 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.176 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-320390 NodeName:no-preload-320390 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 00:13:19.971171   50505 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.176
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-320390"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.176
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.176"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 00:13:19.971273   50505 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-320390 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:no-preload-320390 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1108 00:13:19.971347   50505 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1108 00:13:19.984469   50505 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 00:13:19.984551   50505 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 00:13:19.995491   50505 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1108 00:13:20.013609   50505 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 00:13:20.031507   50505 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I1108 00:13:20.051978   50505 ssh_runner.go:195] Run: grep 192.168.61.176	control-plane.minikube.internal$ /etc/hosts
	I1108 00:13:20.057139   50505 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.176	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:13:20.071438   50505 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390 for IP: 192.168.61.176
	I1108 00:13:20.071471   50505 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:13:20.071635   50505 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1108 00:13:20.071691   50505 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1108 00:13:20.071782   50505 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/client.key
	I1108 00:13:20.071848   50505 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/apiserver.key.492ad1cf
	I1108 00:13:20.071899   50505 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/proxy-client.key
	I1108 00:13:20.072026   50505 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem (1338 bytes)
	W1108 00:13:20.072064   50505 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848_empty.pem, impossibly tiny 0 bytes
	I1108 00:13:20.072080   50505 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 00:13:20.072130   50505 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1108 00:13:20.072167   50505 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1108 00:13:20.072205   50505 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1108 00:13:20.072260   50505 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:13:20.073092   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1108 00:13:20.099422   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 00:13:20.126257   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 00:13:20.153126   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 00:13:20.184849   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 00:13:20.215515   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 00:13:20.247686   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 00:13:20.277712   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 00:13:20.304438   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /usr/share/ca-certificates/168482.pem (1708 bytes)
	I1108 00:13:20.330321   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 00:13:20.361411   50505 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem --> /usr/share/ca-certificates/16848.pem (1338 bytes)
	I1108 00:13:20.390456   50505 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 00:13:20.410634   50505 ssh_runner.go:195] Run: openssl version
	I1108 00:13:20.418597   50505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168482.pem && ln -fs /usr/share/ca-certificates/168482.pem /etc/ssl/certs/168482.pem"
	I1108 00:13:20.431853   50505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168482.pem
	I1108 00:13:20.438127   50505 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1108 00:13:20.438271   50505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168482.pem
	I1108 00:13:20.445644   50505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168482.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 00:13:20.456959   50505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 00:13:20.466413   50505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:20.472311   50505 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:20.472365   50505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:20.477965   50505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 00:13:20.487454   50505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16848.pem && ln -fs /usr/share/ca-certificates/16848.pem /etc/ssl/certs/16848.pem"
	I1108 00:13:20.496731   50505 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16848.pem
	I1108 00:13:20.502531   50505 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1108 00:13:20.502591   50505 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16848.pem
	I1108 00:13:20.509683   50505 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16848.pem /etc/ssl/certs/51391683.0"
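
Each `openssl x509 -hash` call above computes the subject hash that names the /etc/ssl/certs/<hash>.0 symlink OpenSSL uses for CA lookup (e.g. b5213941.0 for minikubeCA.pem in the log). A sketch that derives the link name the same way (subjectHashLink is a hypothetical helper):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // subjectHashLink returns the /etc/ssl/certs/<hash>.0 name that OpenSSL
    // expects for a CA file, using the same `openssl x509 -hash` call as above.
    func subjectHashLink(certPath string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return "", err
    	}
    	return "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0", nil
    }

    func main() {
    	link, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem")
    	if err != nil {
    		fmt.Println("hash failed:", err)
    		return
    	}
    	fmt.Println("would link:", link) // e.g. /etc/ssl/certs/b5213941.0 as logged
    }
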
	I1108 00:13:20.520960   50505 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1108 00:13:20.525545   50505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 00:13:20.531367   50505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 00:13:20.537422   50505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 00:13:20.543607   50505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 00:13:20.548942   50505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 00:13:20.554419   50505 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
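
The -checkend 86400 probes above ask whether each certificate expires within the next 24 hours. The pure-Go equivalent, as a sketch:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin is the pure-Go analogue of `openssl x509 -checkend`:
    // it reports whether the cert expires within the given window.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }
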
	I1108 00:13:20.559633   50505 kubeadm.go:404] StartCluster: {Name:no-preload-320390 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-320390 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.176 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:13:20.559719   50505 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 00:13:20.559766   50505 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:13:20.603718   50505 cri.go:89] found id: ""
	I1108 00:13:20.603795   50505 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 00:13:20.613389   50505 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1108 00:13:20.613418   50505 kubeadm.go:636] restartCluster start
	I1108 00:13:20.613476   50505 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 00:13:20.622276   50505 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:20.623645   50505 kubeconfig.go:92] found "no-preload-320390" server: "https://192.168.61.176:8443"
	I1108 00:13:20.626874   50505 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 00:13:20.638188   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:20.638238   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:20.649536   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:20.649553   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:20.649610   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:20.660145   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:21.160858   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:21.160936   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:21.174163   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:21.660441   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:21.660526   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:21.675795   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:22.160281   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:22.160358   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:22.175777   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:22.660249   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:22.660328   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:22.675747   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:23.160280   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:23.160360   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:23.174686   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:23.661260   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:23.661343   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:23.675936   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:24.160440   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:24.160558   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:24.174501   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
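
The repeated "Checking apiserver status" entries above are a poll loop: pgrep for the apiserver roughly every 500ms until it shows up or a deadline passes. A sketch of that loop (the timeout value is an assumption):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls for a kube-apiserver process the way the checks
    // above do: roughly every 500ms until it appears or the deadline passes.
    func waitForAPIServer(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("kube-apiserver did not start within %v", timeout)
    }

    func main() {
    	if err := waitForAPIServer(2 * time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
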
	I1108 00:13:21.180066   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:21.180534   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:21.180563   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:21.180492   51635 retry.go:31] will retry after 320.996627ms: waiting for machine to come up
	I1108 00:13:21.503202   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:21.503721   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:21.503750   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:21.503689   51635 retry.go:31] will retry after 431.944242ms: waiting for machine to come up
	I1108 00:13:21.937564   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:21.938025   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:21.938054   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:21.937972   51635 retry.go:31] will retry after 592.354358ms: waiting for machine to come up
	I1108 00:13:22.531850   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:22.532321   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:22.532364   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:22.532272   51635 retry.go:31] will retry after 589.753727ms: waiting for machine to come up
	I1108 00:13:23.124275   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:23.124784   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:23.124825   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:23.124746   51635 retry.go:31] will retry after 596.910282ms: waiting for machine to come up
	I1108 00:13:23.722967   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:23.723389   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:23.723419   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:23.723349   51635 retry.go:31] will retry after 793.320391ms: waiting for machine to come up
	I1108 00:13:24.518525   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:24.518953   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:24.518985   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:24.518914   51635 retry.go:31] will retry after 1.247294281s: waiting for machine to come up
	I1108 00:13:25.768137   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:25.768598   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:25.768634   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:25.768541   51635 retry.go:31] will retry after 1.468389149s: waiting for machine to come up
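
The retry.go lines above wait progressively longer, jittered delays between probes while the machine comes up. A sketch of the pattern; the schedule here is illustrative, not libmachine's exact one:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff mirrors the retry.go pattern in the log: each attempt
    // waits a jittered, growing delay before checking again.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: %v\n", d, err)
    		time.Sleep(d)
    	}
    	return err
    }

    func main() {
    	i := 0
    	_ = retryWithBackoff(5, 300*time.Millisecond, func() error {
    		i++
    		if i < 4 {
    			return fmt.Errorf("waiting for machine to come up")
    		}
    		return nil
    	})
    }
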
	I1108 00:13:22.802292   50613 crio.go:444] Took 1.773480 seconds to copy over tarball
	I1108 00:13:22.802374   50613 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1108 00:13:25.811996   50613 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.009592787s)
	I1108 00:13:25.812027   50613 crio.go:451] Took 3.009706 seconds to extract the tarball
	I1108 00:13:25.812036   50613 ssh_runner.go:146] rm: /preloaded.tar.lz4
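
The preload step above copies the tarball over SSH, extracts it with tar filtered through lz4, and then removes it. A sketch of the extract-and-time portion (assumes the lz4 binary is on the guest's PATH):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // extractPreload runs the same command as above and reports how long the
    // extraction took, matching the "Took N seconds" log lines.
    func extractPreload(tarball, dest string) (time.Duration, error) {
    	start := time.Now()
    	// -I lz4 tells tar to filter the archive through the lz4 binary.
    	err := exec.Command("sudo", "tar", "-I", "lz4", "-C", dest, "-xf", tarball).Run()
    	return time.Since(start), err
    }

    func main() {
    	d, err := extractPreload("/preloaded.tar.lz4", "/var")
    	if err != nil {
    		fmt.Println("extract failed:", err)
    		return
    	}
    	fmt.Printf("extracted in %v\n", d)
    }
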
	I1108 00:13:25.852011   50613 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:13:25.903032   50613 crio.go:496] all images are preloaded for cri-o runtime.
	I1108 00:13:25.903055   50613 cache_images.go:84] Images are preloaded, skipping loading
	I1108 00:13:25.903160   50613 ssh_runner.go:195] Run: crio config
	I1108 00:13:25.964562   50613 cni.go:84] Creating CNI manager for ""
	I1108 00:13:25.964585   50613 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:13:25.964601   50613 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1108 00:13:25.964618   50613 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.159 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-253253 NodeName:embed-certs-253253 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 00:13:25.964768   50613 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-253253"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
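Note on the evictionHard thresholds above: they contain literal percent signs, which is why raw minikube logs sometimes render them as 0%!"(MISSING). When the rendered YAML is passed back through Go's fmt as a format string with no operands, %" parses as a verb with a missing argument. A minimal Go sketch reproducing the artifact (go vet would flag the Sprintf call, which is exactly the hazard):

    package main

    import "fmt"

    func main() {
        // The kubeadm template's evictionHard values contain a literal
        // percent sign. When the rendered text is later used as a fmt
        // format string with no arguments, %" parses as a verb with a
        // missing operand, so fmt emits the %!"(MISSING) marker that
        // shows up in raw minikube logs.
        fmt.Println(fmt.Sprintf(`nodefs.available: "0%"`))
        // Output: nodefs.available: "0%!"(MISSING)
    }
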
	I1108 00:13:25.964869   50613 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-253253 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:embed-certs-253253 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
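The kubelet drop-in above sets ExecStart twice on purpose: in a systemd drop-in, a bare "ExecStart=" clears the command inherited from the base unit, so the second assignment replaces it rather than adding a second invocation. A minimal sketch of rendering such a drop-in in Go; the helper and its parameters are illustrative, not minikube's actual code, and the values mirror the log:

    package main

    import "fmt"

    // kubeletDropIn is an illustrative helper, not minikube's source.
    // The bare "ExecStart=" line matters: in a systemd drop-in it resets
    // the command inherited from the base unit, so the next ExecStart=
    // replaces it instead of appending a second invocation.
    func kubeletDropIn(runtime, version, nodeIP string) string {
        return "[Unit]\n" +
            "Wants=" + runtime + ".service\n\n" +
            "[Service]\n" +
            "ExecStart=\n" +
            "ExecStart=/var/lib/minikube/binaries/" + version +
            "/kubelet --node-ip=" + nodeIP + "\n\n" +
            "[Install]\n"
    }

    func main() {
        fmt.Print(kubeletDropIn("crio", "v1.28.3", "192.168.39.159"))
    }
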
	I1108 00:13:25.964931   50613 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1108 00:13:25.973956   50613 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 00:13:25.974031   50613 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 00:13:25.982070   50613 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1108 00:13:26.001066   50613 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 00:13:26.020258   50613 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1108 00:13:26.039418   50613 ssh_runner.go:195] Run: grep 192.168.39.159	control-plane.minikube.internal$ /etc/hosts
	I1108 00:13:26.043133   50613 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
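The bash one-liner above makes the /etc/hosts edit idempotent: it filters out any existing line ending in the control-plane hostname, appends a fresh "IP<tab>hostname" entry, and copies the result back with sudo. The same logic as a local Go sketch (the real code runs the bash over SSH with sudo):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry mirrors the one-liner: drop any stale line ending
    // in "<tab>host", then append "ip<tab>host". Local-file sketch only.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.39.159", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
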
	I1108 00:13:26.055865   50613 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253 for IP: 192.168.39.159
	I1108 00:13:26.055902   50613 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:13:26.056069   50613 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1108 00:13:26.056268   50613 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1108 00:13:26.056374   50613 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/client.key
	I1108 00:13:26.128533   50613 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/apiserver.key.b15c5797
	I1108 00:13:26.128666   50613 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/proxy-client.key
	I1108 00:13:26.128842   50613 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem (1338 bytes)
	W1108 00:13:26.128884   50613 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848_empty.pem, impossibly tiny 0 bytes
	I1108 00:13:26.128895   50613 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 00:13:26.128930   50613 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1108 00:13:26.128953   50613 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1108 00:13:26.128975   50613 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1108 00:13:26.129016   50613 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:13:26.129621   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1108 00:13:26.153776   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 00:13:26.179006   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 00:13:26.202199   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/embed-certs-253253/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 00:13:26.225241   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 00:13:26.247745   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 00:13:26.270546   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 00:13:26.297075   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 00:13:26.320835   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem --> /usr/share/ca-certificates/16848.pem (1338 bytes)
	I1108 00:13:26.344068   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /usr/share/ca-certificates/168482.pem (1708 bytes)
	I1108 00:13:26.367085   50613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 00:13:26.391491   50613 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 00:13:26.408055   50613 ssh_runner.go:195] Run: openssl version
	I1108 00:13:26.413824   50613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168482.pem && ln -fs /usr/share/ca-certificates/168482.pem /etc/ssl/certs/168482.pem"
	I1108 00:13:26.423666   50613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168482.pem
	I1108 00:13:26.428281   50613 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1108 00:13:26.428332   50613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168482.pem
	I1108 00:13:26.433901   50613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168482.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 00:13:26.443832   50613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 00:13:26.453722   50613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:26.458290   50613 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:26.458341   50613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:26.464035   50613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 00:13:26.473908   50613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16848.pem && ln -fs /usr/share/ca-certificates/16848.pem /etc/ssl/certs/16848.pem"
	I1108 00:13:26.483600   50613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16848.pem
	I1108 00:13:26.488053   50613 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1108 00:13:26.488113   50613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16848.pem
	I1108 00:13:26.493571   50613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16848.pem /etc/ssl/certs/51391683.0"
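The symlink names above (b5213941.0, 3ec20f2e.0, 51391683.0) follow OpenSSL's c_rehash convention: tools look up a CA in /etc/ssl/certs by the certificate's subject hash, so each PEM gets a <hash>.0 link pointing at it. A rough Go sketch of that step, shelling out to openssl for the hash; paths are examples taken from the log:

    package main

    import (
        "os"
        "os/exec"
        "strings"
    )

    // Sketch of the hash-link step: compute the subject hash the way
    // "openssl x509 -hash -noout" does, then create the <hash>.0 link
    // OpenSSL's c_rehash lookup expects.
    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941
        if err := os.Symlink(pem, "/etc/ssl/certs/"+hash+".0"); err != nil && !os.IsExist(err) {
            panic(err)
        }
    }
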
	I1108 00:13:26.503466   50613 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1108 00:13:26.508047   50613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 00:13:26.514165   50613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 00:13:26.520278   50613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 00:13:26.526421   50613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 00:13:26.532388   50613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 00:13:26.538323   50613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
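Each "-checkend 86400" run above asks openssl whether the certificate expires within the next 24 hours (86400 seconds); a non-zero exit would trigger regeneration. A stdlib Go equivalent as a sketch, assuming a PEM file path as the only argument:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // Rough Go equivalent of "openssl x509 -checkend 86400": exit
    // non-zero if the certificate expires within the next 24 hours.
    func main() {
        data, err := os.ReadFile(os.Args[1])
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM data found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("Certificate will expire")
            os.Exit(1)
        }
        fmt.Println("Certificate will not expire")
    }
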
	I1108 00:13:26.544215   50613 kubeadm.go:404] StartCluster: {Name:embed-certs-253253 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-253253 Namespace:def
ault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/m
inikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:13:26.544287   50613 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 00:13:26.544330   50613 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:13:26.586501   50613 cri.go:89] found id: ""
	I1108 00:13:26.586578   50613 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 00:13:26.596647   50613 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1108 00:13:26.596676   50613 kubeadm.go:636] restartCluster start
	I1108 00:13:26.596734   50613 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 00:13:26.605901   50613 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:26.607305   50613 kubeconfig.go:92] found "embed-certs-253253" server: "https://192.168.39.159:8443"
	I1108 00:13:26.610434   50613 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 00:13:26.619238   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:26.619291   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:26.630724   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:26.630746   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:26.630787   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:26.641997   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:24.660263   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:24.660349   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:24.675197   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:25.160678   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:25.160774   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:25.172593   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:25.660613   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:25.660696   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:25.672242   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:26.160884   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:26.160978   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:26.174734   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:26.660269   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:26.660337   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:26.671721   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:27.160250   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:27.160344   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:27.171104   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:27.660667   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:27.660729   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:27.671899   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:28.160408   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:28.160471   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:28.170733   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:28.660264   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:28.660338   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:28.671482   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:29.161084   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:29.161163   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:29.172174   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:27.238049   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:27.238487   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:27.238518   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:27.238428   51635 retry.go:31] will retry after 1.602246301s: waiting for machine to come up
	I1108 00:13:28.842785   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:28.843235   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:28.843259   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:28.843188   51635 retry.go:31] will retry after 2.218327688s: waiting for machine to come up
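The retry.go lines above show the wait growing roughly geometrically with some noise (1.6s, 2.2s, 2.9s, 3.9s), i.e. exponential backoff with jitter while polling libvirt for the machine's DHCP lease. A sketch of that pattern; the base, growth factor, and jitter here are guesses for illustration, not minikube's actual parameters:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // Backoff-with-jitter sketch matching the shape of the waits above.
    func main() {
        backoff := time.Second
        for attempt := 1; attempt <= 5; attempt++ {
            // ... probe libvirt for the machine's IP here; break on success ...
            wait := backoff + time.Duration(rand.Int63n(int64(backoff)/2))
            fmt.Printf("attempt %d: will retry after %v\n", attempt, wait)
            time.Sleep(wait)
            backoff = backoff * 3 / 2
        }
    }
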
	I1108 00:13:27.142567   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:27.242647   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:27.256767   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:27.642212   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:27.642306   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:27.654185   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:28.142751   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:28.142832   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:28.154141   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:28.642738   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:28.642817   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:28.654476   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:29.143085   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:29.143168   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:29.154553   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:29.642422   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:29.642499   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:29.658048   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:30.142497   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:30.142568   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:30.153710   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:30.642216   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:30.642291   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:30.658036   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:31.142547   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:31.142634   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:31.159124   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:31.642720   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:31.642810   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:31.654593   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:29.660882   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:29.660944   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:29.675528   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:30.161058   50505 api_server.go:166] Checking apiserver status ...
	I1108 00:13:30.161121   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:30.171493   50505 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:30.638722   50505 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1108 00:13:30.638762   50505 kubeadm.go:1128] stopping kube-system containers ...
	I1108 00:13:30.638776   50505 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1108 00:13:30.638825   50505 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:13:30.677982   50505 cri.go:89] found id: ""
	I1108 00:13:30.678064   50505 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1108 00:13:30.693650   50505 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:13:30.702679   50505 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:13:30.702757   50505 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:13:30.711179   50505 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1108 00:13:30.711212   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:30.843638   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:31.970868   50505 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.127188218s)
	I1108 00:13:31.970904   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:32.167903   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:32.242076   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:32.324914   50505 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:13:32.325001   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:32.342576   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:32.861296   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:33.360958   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:33.861308   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
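The half-second cadence of the pgrep runs above is a simple poll-until-deadline loop: after "kubeadm init phase" lays down the static pod manifests, the waiter keeps asking for a kube-apiserver process until one appears. A local sketch of the loop (the real code executes pgrep on the guest over SSH):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // Poll until a kube-apiserver process matching the minikube
    // manifests appears, or give up at the deadline.
    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                fmt.Printf("apiserver pid: %s", out)
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver process")
    }
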
	I1108 00:13:31.062973   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:31.063425   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:31.063465   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:31.063370   51635 retry.go:31] will retry after 2.935881965s: waiting for machine to come up
	I1108 00:13:34.002009   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:34.002456   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:34.002481   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:34.002385   51635 retry.go:31] will retry after 2.918632194s: waiting for machine to come up
	I1108 00:13:32.142573   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:32.142668   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:32.156513   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:32.643129   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:32.643203   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:32.654790   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:33.143023   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:33.143114   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:33.159475   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:33.642631   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:33.642728   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:33.658632   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:34.142142   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:34.142218   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:34.158375   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:34.642356   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:34.642437   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:34.657692   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:35.142180   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:35.142276   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:35.157616   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:35.642121   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:35.642194   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:35.656642   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:36.142162   50613 api_server.go:166] Checking apiserver status ...
	I1108 00:13:36.142270   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:36.153340   50613 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:36.619909   50613 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1108 00:13:36.619941   50613 kubeadm.go:1128] stopping kube-system containers ...
	I1108 00:13:36.619958   50613 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1108 00:13:36.620035   50613 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:13:36.656935   50613 cri.go:89] found id: ""
	I1108 00:13:36.657008   50613 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1108 00:13:36.671784   50613 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:13:36.680073   50613 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:13:36.680120   50613 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:13:36.688560   50613 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1108 00:13:36.688575   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:36.802484   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:34.361558   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:34.860720   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:34.881793   50505 api_server.go:72] duration metric: took 2.55688905s to wait for apiserver process to appear ...
	I1108 00:13:34.881823   50505 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:13:34.881843   50505 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1108 00:13:38.396447   50505 api_server.go:279] https://192.168.61.176:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:13:38.396488   50505 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:13:38.396503   50505 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1108 00:13:38.471135   50505 api_server.go:279] https://192.168.61.176:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:13:38.471165   50505 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:13:38.971845   50505 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1108 00:13:38.977126   50505 api_server.go:279] https://192.168.61.176:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:13:38.977163   50505 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:13:39.472030   50505 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1108 00:13:39.477778   50505 api_server.go:279] https://192.168.61.176:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:13:39.477810   50505 api_server.go:103] status: https://192.168.61.176:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:13:39.971333   50505 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1108 00:13:39.977224   50505 api_server.go:279] https://192.168.61.176:8443/healthz returned 200:
	ok
	I1108 00:13:39.987415   50505 api_server.go:141] control plane version: v1.28.3
	I1108 00:13:39.987446   50505 api_server.go:131] duration metric: took 5.10561478s to wait for apiserver health ...
	I1108 00:13:39.987456   50505 cni.go:84] Creating CNI manager for ""
	I1108 00:13:39.987465   50505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:13:39.989270   50505 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
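The healthz sequence above is the usual progression for a restarting control plane: 403 while only anonymous auth is accepted and the RBAC bootstrap roles are not yet in place, then 500 while poststarthooks finish, then 200 "ok". A sketch of the polling loop, with InsecureSkipVerify standing in for the cluster-CA handling the real client performs:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // Poll /healthz until it returns 200 "ok", tolerating the interim
    // 403s and 500s seen in the log above.
    func main() {
        client := &http.Client{
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
            Timeout: 5 * time.Second,
        }
        deadline := time.Now().Add(5 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.61.176:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz: %s\n", body)
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for healthz")
    }
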
	I1108 00:13:36.922427   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:36.922874   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | unable to find current IP address of domain default-k8s-diff-port-039263 in network mk-default-k8s-diff-port-039263
	I1108 00:13:36.922916   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | I1108 00:13:36.922824   51635 retry.go:31] will retry after 3.960656744s: waiting for machine to come up
	I1108 00:13:40.886022   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:40.886563   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Found IP for machine: 192.168.72.116
	I1108 00:13:40.886591   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has current primary IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:40.886601   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Reserving static IP address...
	I1108 00:13:40.886974   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-039263", mac: "52:54:00:aa:72:05", ip: "192.168.72.116"} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:40.887012   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | skip adding static IP to network mk-default-k8s-diff-port-039263 - found existing host DHCP lease matching {name: "default-k8s-diff-port-039263", mac: "52:54:00:aa:72:05", ip: "192.168.72.116"}
	I1108 00:13:40.887037   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Getting to WaitForSSH function...
	I1108 00:13:40.887058   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Reserved static IP address: 192.168.72.116
	I1108 00:13:40.887072   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Waiting for SSH to be available...
	I1108 00:13:40.889373   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:40.889771   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:40.889803   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:40.889991   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Using SSH client type: external
	I1108 00:13:40.890018   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa (-rw-------)
	I1108 00:13:40.890054   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1108 00:13:40.890068   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | About to run SSH command:
	I1108 00:13:40.890082   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | exit 0
	I1108 00:13:37.573684   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:37.781978   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:37.863424   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:37.935306   50613 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:13:37.935377   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:37.947059   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:38.458806   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:38.959076   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:39.459045   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:39.959244   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:40.458249   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:13:40.480623   50613 api_server.go:72] duration metric: took 2.545315304s to wait for apiserver process to appear ...
	I1108 00:13:40.480650   50613 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:13:40.480668   50613 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1108 00:13:42.285976   50022 start.go:369] acquired machines lock for "old-k8s-version-590541" in 56.809842177s
	I1108 00:13:42.286028   50022 start.go:96] Skipping create...Using existing machine configuration
	I1108 00:13:42.286039   50022 fix.go:54] fixHost starting: 
	I1108 00:13:42.286455   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:13:42.286492   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:13:42.305869   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46287
	I1108 00:13:42.306363   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:13:42.306845   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:13:42.306871   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:13:42.307221   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:13:42.307548   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:13:42.307740   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetState
	I1108 00:13:42.309513   50022 fix.go:102] recreateIfNeeded on old-k8s-version-590541: state=Stopped err=<nil>
	I1108 00:13:42.309539   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	W1108 00:13:42.309706   50022 fix.go:128] unexpected machine state, will restart: <nil>
	I1108 00:13:42.311819   50022 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-590541" ...
	I1108 00:13:40.997357   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | SSH cmd err, output: <nil>: 
	I1108 00:13:40.997688   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetConfigRaw
	I1108 00:13:40.998394   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetIP
	I1108 00:13:41.001148   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.001578   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.001612   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.001940   51228 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/config.json ...
	I1108 00:13:41.002174   51228 machine.go:88] provisioning docker machine ...
	I1108 00:13:41.002197   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:41.002421   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetMachineName
	I1108 00:13:41.002577   51228 buildroot.go:166] provisioning hostname "default-k8s-diff-port-039263"
	I1108 00:13:41.002600   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetMachineName
	I1108 00:13:41.002800   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:41.005167   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.005544   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.005584   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.005873   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:41.006029   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.006176   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.006291   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:41.006425   51228 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:41.007012   51228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.116 22 <nil> <nil>}
	I1108 00:13:41.007036   51228 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-039263 && echo "default-k8s-diff-port-039263" | sudo tee /etc/hostname
	I1108 00:13:41.168664   51228 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-039263
	
	I1108 00:13:41.168698   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:41.171709   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.172090   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.172132   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.172266   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:41.172457   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.172650   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.172867   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:41.173130   51228 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:41.173626   51228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.116 22 <nil> <nil>}
	I1108 00:13:41.173654   51228 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-039263' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-039263/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-039263' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 00:13:41.324510   51228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
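The script above keeps exactly one 127.0.1.1 entry for the node name, rewriting an existing entry rather than appending a duplicate. A quick hand check of the result, assuming a shell on the guest (the hostname is this profile's):

	# verify the loopback mapping the provisioner just wrote
	grep -n '127.0.1.1' /etc/hosts
	getent hosts default-k8s-diff-port-039263   # should resolve via /etc/hosts to 127.0.1.1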
	I1108 00:13:41.324539   51228 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1108 00:13:41.324586   51228 buildroot.go:174] setting up certificates
	I1108 00:13:41.324598   51228 provision.go:83] configureAuth start
	I1108 00:13:41.324610   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetMachineName
	I1108 00:13:41.324933   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetIP
	I1108 00:13:41.327797   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.328176   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.328213   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.328321   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:41.330558   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.330921   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.330955   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.331062   51228 provision.go:138] copyHostCerts
	I1108 00:13:41.331128   51228 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1108 00:13:41.331150   51228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1108 00:13:41.331222   51228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1108 00:13:41.331337   51228 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1108 00:13:41.331355   51228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1108 00:13:41.331387   51228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1108 00:13:41.331467   51228 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1108 00:13:41.331479   51228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1108 00:13:41.331506   51228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1108 00:13:41.331592   51228 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-039263 san=[192.168.72.116 192.168.72.116 localhost 127.0.0.1 minikube default-k8s-diff-port-039263]
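provision.go signs a per-machine server certificate with the local minikube CA, embedding the SANs listed above. A rough openssl equivalent for illustration only; minikube does this in Go, not by shelling out, and the file names here are placeholders:

	# illustrative sketch of the CA-signed server cert with the logged org and SANs
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.default-k8s-diff-port-039263"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf 'subjectAltName=IP:192.168.72.116,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:default-k8s-diff-port-039263')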
	I1108 00:13:41.452051   51228 provision.go:172] copyRemoteCerts
	I1108 00:13:41.452123   51228 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 00:13:41.452156   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:41.454755   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.455056   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.455089   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.455288   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:41.455512   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.455704   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:41.455831   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:13:41.554387   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 00:13:41.586357   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 00:13:41.616703   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1108 00:13:41.646461   51228 provision.go:86] duration metric: configureAuth took 321.850044ms
	I1108 00:13:41.646489   51228 buildroot.go:189] setting minikube options for container-runtime
	I1108 00:13:41.646730   51228 config.go:182] Loaded profile config "default-k8s-diff-port-039263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:13:41.646825   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:41.650386   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.650813   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:41.650856   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:41.651031   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:41.651232   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.651422   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:41.651598   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:41.651763   51228 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:41.652302   51228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.116 22 <nil> <nil>}
	I1108 00:13:41.652325   51228 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 00:13:42.006373   51228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 00:13:42.006401   51228 machine.go:91] provisioned docker machine in 1.004212938s
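The sysconfig drop-in written above feeds CRI-O its insecure-registry flag, and the same command restarts the service. A manual spot-check, not part of the test itself:

	# confirm the drop-in contents and that CRI-O came back up
	cat /etc/sysconfig/crio.minikube     # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl is-active crio             # expect: active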
	I1108 00:13:42.006414   51228 start.go:300] post-start starting for "default-k8s-diff-port-039263" (driver="kvm2")
	I1108 00:13:42.006426   51228 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 00:13:42.006445   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:42.006785   51228 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 00:13:42.006811   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:42.009619   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.009950   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:42.009986   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.010127   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:42.010344   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:42.010515   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:42.010673   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:13:42.106366   51228 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 00:13:42.110676   51228 info.go:137] Remote host: Buildroot 2021.02.12
	I1108 00:13:42.110701   51228 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1108 00:13:42.110770   51228 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1108 00:13:42.110869   51228 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1108 00:13:42.110972   51228 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 00:13:42.121223   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:13:42.146966   51228 start.go:303] post-start completed in 140.536976ms
	I1108 00:13:42.146996   51228 fix.go:56] fixHost completed within 22.681133015s
	I1108 00:13:42.147019   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:42.149705   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.150132   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:42.150165   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.150406   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:42.150606   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:42.150818   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:42.150988   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:42.151156   51228 main.go:141] libmachine: Using SSH client type: native
	I1108 00:13:42.151511   51228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.72.116 22 <nil> <nil>}
	I1108 00:13:42.151523   51228 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1108 00:13:42.285789   51228 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699402422.233004693
	
	I1108 00:13:42.285815   51228 fix.go:206] guest clock: 1699402422.233004693
	I1108 00:13:42.285823   51228 fix.go:219] Guest: 2023-11-08 00:13:42.233004693 +0000 UTC Remote: 2023-11-08 00:13:42.146999966 +0000 UTC m=+101.273648910 (delta=86.004727ms)
	I1108 00:13:42.285869   51228 fix.go:190] guest clock delta is within tolerance: 86.004727ms
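fix.go reads the guest's date +%s.%N and compares it to the host clock, accepting the drift when it is under tolerance. The 86.004727ms delta reported above is simply the difference of the two logged timestamps:

	# reproduce the logged delta from the guest and host timestamps
	echo '1699402422.233004693 - 1699402422.146999966' | bc   # prints .086004727, i.e. ~86.0ms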
	I1108 00:13:42.285877   51228 start.go:83] releasing machines lock for "default-k8s-diff-port-039263", held for 22.820045752s
	I1108 00:13:42.285913   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:42.286161   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetIP
	I1108 00:13:42.288711   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.289095   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:42.289133   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.289241   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:42.289864   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:42.290109   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:13:42.290209   51228 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 00:13:42.290261   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:42.290323   51228 ssh_runner.go:195] Run: cat /version.json
	I1108 00:13:42.290345   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:13:42.293063   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.293219   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.293451   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:42.293483   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.293570   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:42.293599   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:42.293721   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:42.293878   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:42.293887   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:13:42.294075   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:13:42.294085   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:42.294234   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:13:42.294280   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:13:42.294336   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:13:42.386493   51228 ssh_runner.go:195] Run: systemctl --version
	I1108 00:13:42.411009   51228 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 00:13:42.558200   51228 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 00:13:42.566040   51228 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 00:13:42.566116   51228 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:13:42.584775   51228 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
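ssh_runner passes the find arguments as discrete tokens, so the parentheses and globs above appear unquoted in the log. To run the same disable step by hand it needs shell quoting, roughly:

	# shell-safe equivalent of the logged find/mv that sidelines bridge/podman CNI configs
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' sh {} \;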
	I1108 00:13:42.584800   51228 start.go:472] detecting cgroup driver to use...
	I1108 00:13:42.584872   51228 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 00:13:42.598720   51228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 00:13:42.612836   51228 docker.go:203] disabling cri-docker service (if available) ...
	I1108 00:13:42.612927   51228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 00:13:42.627474   51228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 00:13:42.641670   51228 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 00:13:42.753616   51228 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 00:13:42.888608   51228 docker.go:219] disabling docker service ...
	I1108 00:13:42.888680   51228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 00:13:42.903298   51228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 00:13:42.920184   51228 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 00:13:43.054621   51228 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 00:13:43.181836   51228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
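After the stop/disable/mask sequence, both runtimes' units should report masked so socket activation cannot pull them back in. A manual spot-check, assuming a shell on the guest:

	# confirm docker and cri-docker are out of the way before CRI-O takes over
	systemctl is-enabled docker.service cri-docker.service   # expect: masked
	systemctl is-active docker.service                       # expect: inactive or failed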
	I1108 00:13:43.198481   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 00:13:43.219759   51228 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1108 00:13:43.219827   51228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:43.231137   51228 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 00:13:43.231221   51228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:43.242206   51228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:13:43.253506   51228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
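Taken together, the three sed edits above leave the 02-crio.conf drop-in pointing at the pause image, using the cgroupfs manager, and running conmon in the pod cgroup. A quick way to confirm the net effect:

	# expected state of /etc/crio/crio.conf.d/02-crio.conf after the sed edits
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"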
	I1108 00:13:43.264311   51228 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 00:13:43.276451   51228 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 00:13:43.288448   51228 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1108 00:13:43.288522   51228 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1108 00:13:43.305986   51228 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
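The sysctl probe exits 255 because br_netfilter is not loaded yet, so the /proc/sys/net/bridge keys do not exist; loading the module creates them, which is exactly the recovery the log shows. Condensed:

	# once br_netfilter is loaded the bridge-nf sysctls appear
	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables   # now readable (and settable)
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward  # same forwarding toggle as above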
	I1108 00:13:43.318366   51228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 00:13:43.479739   51228 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 00:13:43.705223   51228 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 00:13:43.705302   51228 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 00:13:43.711842   51228 start.go:540] Will wait 60s for crictl version
	I1108 00:13:43.711915   51228 ssh_runner.go:195] Run: which crictl
	I1108 00:13:43.717688   51228 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 00:13:43.762492   51228 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1108 00:13:43.762651   51228 ssh_runner.go:195] Run: crio --version
	I1108 00:13:43.814548   51228 ssh_runner.go:195] Run: crio --version
	I1108 00:13:43.870144   51228 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1108 00:13:39.990811   50505 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:13:40.020162   50505 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1108 00:13:40.064758   50505 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:13:40.081652   50505 system_pods.go:59] 8 kube-system pods found
	I1108 00:13:40.081705   50505 system_pods.go:61] "coredns-5dd5756b68-lhnz5" [936252ee-4f00-49e2-96e4-7c4f4a4ca378] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 00:13:40.081725   50505 system_pods.go:61] "etcd-no-preload-320390" [95e08672-dc80-4aa6-bd4a-e5f77bfc4b51] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 00:13:40.081738   50505 system_pods.go:61] "kube-apiserver-no-preload-320390" [3261561e-b7d5-4302-8e0b-301d00407e8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 00:13:40.081748   50505 system_pods.go:61] "kube-controller-manager-no-preload-320390" [b87602fd-b248-4529-9116-1851a4284bbf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 00:13:40.081763   50505 system_pods.go:61] "kube-proxy-c4mbm" [33806b69-57c0-4807-849b-b6a4f8a5db12] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 00:13:40.081777   50505 system_pods.go:61] "kube-scheduler-no-preload-320390" [4f7b4160-b99e-4f76-9b12-c5b1849c91b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 00:13:40.081791   50505 system_pods.go:61] "metrics-server-57f55c9bc5-th89c" [06aea7c0-065b-44a4-8d53-432f5722e937] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:13:40.081810   50505 system_pods.go:61] "storage-provisioner" [c7b0810b-1ba7-4d56-ad97-3f04d771960d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 00:13:40.081823   50505 system_pods.go:74] duration metric: took 17.024016ms to wait for pod list to return data ...
	I1108 00:13:40.081836   50505 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:13:40.093789   50505 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:13:40.093827   50505 node_conditions.go:123] node cpu capacity is 2
	I1108 00:13:40.093841   50505 node_conditions.go:105] duration metric: took 11.998569ms to run NodePressure ...
	I1108 00:13:40.093863   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:40.340962   50505 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1108 00:13:40.346004   50505 kubeadm.go:787] kubelet initialised
	I1108 00:13:40.346032   50505 kubeadm.go:788] duration metric: took 5.042344ms waiting for restarted kubelet to initialise ...
	I1108 00:13:40.346044   50505 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:13:40.355648   50505 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-lhnz5" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:42.377985   50505 pod_ready.go:102] pod "coredns-5dd5756b68-lhnz5" in "kube-system" namespace has status "Ready":"False"
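pod_ready polls the pod's Ready condition through the API. The same wait can be expressed with kubectl, assuming the context carries the profile name as elsewhere in this report, with a timeout mirroring the 4m budget above:

	# hand-rolled equivalent of the pod_ready wait for the CoreDNS pod
	kubectl --context no-preload-320390 -n kube-system \
	  wait --for=condition=Ready pod/coredns-5dd5756b68-lhnz5 --timeout=4m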
	I1108 00:13:42.313355   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Start
	I1108 00:13:42.313526   50022 main.go:141] libmachine: (old-k8s-version-590541) Ensuring networks are active...
	I1108 00:13:42.314176   50022 main.go:141] libmachine: (old-k8s-version-590541) Ensuring network default is active
	I1108 00:13:42.314638   50022 main.go:141] libmachine: (old-k8s-version-590541) Ensuring network mk-old-k8s-version-590541 is active
	I1108 00:13:42.315060   50022 main.go:141] libmachine: (old-k8s-version-590541) Getting domain xml...
	I1108 00:13:42.315833   50022 main.go:141] libmachine: (old-k8s-version-590541) Creating domain...
	I1108 00:13:43.739499   50022 main.go:141] libmachine: (old-k8s-version-590541) Waiting to get IP...
	I1108 00:13:43.740647   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:43.741195   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:43.741259   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:43.741155   51822 retry.go:31] will retry after 195.621332ms: waiting for machine to come up
	I1108 00:13:43.938557   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:43.939127   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:43.939268   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:43.939200   51822 retry.go:31] will retry after 278.651736ms: waiting for machine to come up
	I1108 00:13:44.219831   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:44.220473   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:44.220500   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:44.220418   51822 retry.go:31] will retry after 384.748872ms: waiting for machine to come up
	I1108 00:13:44.607110   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:44.607665   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:44.607696   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:44.607591   51822 retry.go:31] will retry after 401.60668ms: waiting for machine to come up
	I1108 00:13:43.871596   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetIP
	I1108 00:13:43.874814   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:43.875307   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:13:43.875357   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:13:43.875575   51228 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1108 00:13:43.880324   51228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
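The brace group filters any stale host.minikube.internal entry out of /etc/hosts and appends the current gateway address, so repeated starts do not accumulate duplicates. To confirm from inside the guest:

	# host.minikube.internal should now resolve to the libvirt gateway
	getent hosts host.minikube.internal   # expect: 192.168.72.1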
	I1108 00:13:43.895271   51228 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1108 00:13:43.895331   51228 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:13:43.943120   51228 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1108 00:13:43.943238   51228 ssh_runner.go:195] Run: which lz4
	I1108 00:13:43.947723   51228 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1108 00:13:43.952328   51228 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1108 00:13:43.952365   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I1108 00:13:45.857547   51228 crio.go:444] Took 1.909852 seconds to copy over tarball
	I1108 00:13:45.857623   51228 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
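The preload is a ~457 MB lz4-compressed tarball of the CRI-O image store, unpacked straight under /var. The manual equivalent of the extract-and-clean-up step (lz4 must be present on the guest, which this log shows it is):

	# extract the preloaded images exactly as the test does, then remove the tarball
	sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm -f /preloaded.tar.lz4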
	I1108 00:13:45.314087   50613 api_server.go:279] https://192.168.39.159:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:13:45.314125   50613 api_server.go:103] status: https://192.168.39.159:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:13:45.314144   50613 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1108 00:13:45.333352   50613 api_server.go:279] https://192.168.39.159:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:13:45.333384   50613 api_server.go:103] status: https://192.168.39.159:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:13:45.833959   50613 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1108 00:13:45.852530   50613 api_server.go:279] https://192.168.39.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:13:45.852613   50613 api_server.go:103] status: https://192.168.39.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:13:46.333996   50613 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1108 00:13:46.346680   50613 api_server.go:279] https://192.168.39.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:13:46.346714   50613 api_server.go:103] status: https://192.168.39.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:13:46.833955   50613 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1108 00:13:46.841287   50613 api_server.go:279] https://192.168.39.159:8443/healthz returned 200:
	ok
	I1108 00:13:46.853271   50613 api_server.go:141] control plane version: v1.28.3
	I1108 00:13:46.853299   50613 api_server.go:131] duration metric: took 6.372641273s to wait for apiserver health ...
	I1108 00:13:46.853310   50613 cni.go:84] Creating CNI manager for ""
	I1108 00:13:46.853318   50613 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:13:46.855336   50613 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:13:46.856955   50613 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:13:46.892049   50613 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1108 00:13:46.933039   50613 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:13:44.399678   50505 pod_ready.go:102] pod "coredns-5dd5756b68-lhnz5" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:45.879110   50505 pod_ready.go:92] pod "coredns-5dd5756b68-lhnz5" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:45.879142   50505 pod_ready.go:81] duration metric: took 5.523463579s waiting for pod "coredns-5dd5756b68-lhnz5" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:45.879154   50505 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:45.885356   50505 pod_ready.go:92] pod "etcd-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:45.885377   50505 pod_ready.go:81] duration metric: took 6.21581ms waiting for pod "etcd-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:45.885385   50505 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:47.914308   50505 pod_ready.go:102] pod "kube-apiserver-no-preload-320390" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:45.011074   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:45.011525   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:45.011560   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:45.011500   51822 retry.go:31] will retry after 708.154492ms: waiting for machine to come up
	I1108 00:13:45.720911   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:45.721383   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:45.721418   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:45.721294   51822 retry.go:31] will retry after 746.365542ms: waiting for machine to come up
	I1108 00:13:46.469031   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:46.469615   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:46.469641   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:46.469556   51822 retry.go:31] will retry after 924.305758ms: waiting for machine to come up
	I1108 00:13:47.395756   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:47.396297   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:47.396323   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:47.396241   51822 retry.go:31] will retry after 1.343866256s: waiting for machine to come up
	I1108 00:13:48.741427   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:48.741851   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:48.741883   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:48.741816   51822 retry.go:31] will retry after 1.388849147s: waiting for machine to come up
	I1108 00:13:49.625178   51228 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.76753046s)
	I1108 00:13:49.625229   51228 crio.go:451] Took 3.767633 seconds to extract the tarball
	I1108 00:13:49.625242   51228 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1108 00:13:49.670263   51228 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:13:49.727650   51228 crio.go:496] all images are preloaded for cri-o runtime.
	I1108 00:13:49.727677   51228 cache_images.go:84] Images are preloaded, skipping loading
	I1108 00:13:49.727747   51228 ssh_runner.go:195] Run: crio config
	I1108 00:13:49.811565   51228 cni.go:84] Creating CNI manager for ""
	I1108 00:13:49.811592   51228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:13:49.811615   51228 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1108 00:13:49.811639   51228 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.116 APIServerPort:8444 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-039263 NodeName:default-k8s-diff-port-039263 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 00:13:49.811812   51228 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.116
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-039263"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
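The rendered config lands at /var/tmp/minikube/kubeadm.yaml (written as kubeadm.yaml.new first, per the scp below). The test never validates it explicitly, but recent kubeadm can, if the subcommand is available in this build:

	# optional sanity check of the generated config (kubeadm >= 1.26)
	sudo /var/lib/minikube/binaries/v1.28.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml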
	
	I1108 00:13:49.811906   51228 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-039263 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-039263 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
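The unit text above lands as a drop-in (the 388-byte 10-kubeadm.conf scp'd below); the empty ExecStart= line is the stock systemd idiom for clearing the base unit's command before overriding it. To see the merged result on the guest:

	# view the kubelet unit with the drop-in override applied
	sudo systemctl daemon-reload
	systemctl cat kubelet.service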
	I1108 00:13:49.811984   51228 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1108 00:13:49.822961   51228 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 00:13:49.823027   51228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 00:13:49.832632   51228 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1108 00:13:49.850812   51228 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 00:13:49.869345   51228 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1108 00:13:49.887645   51228 ssh_runner.go:195] Run: grep 192.168.72.116	control-plane.minikube.internal$ /etc/hosts
	I1108 00:13:49.892538   51228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:13:49.907166   51228 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263 for IP: 192.168.72.116
	I1108 00:13:49.907205   51228 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:13:49.907374   51228 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1108 00:13:49.907425   51228 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1108 00:13:49.907523   51228 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/client.key
	I1108 00:13:49.907601   51228 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/apiserver.key.b2cbdf93
	I1108 00:13:49.907658   51228 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/proxy-client.key
	I1108 00:13:49.907807   51228 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem (1338 bytes)
	W1108 00:13:49.907851   51228 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848_empty.pem, impossibly tiny 0 bytes
	I1108 00:13:49.907872   51228 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 00:13:49.907915   51228 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1108 00:13:49.907951   51228 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1108 00:13:49.907988   51228 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1108 00:13:49.908046   51228 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:13:49.908955   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1108 00:13:49.938941   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 00:13:49.964654   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 00:13:49.991354   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 00:13:50.018895   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 00:13:50.048330   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 00:13:50.076095   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 00:13:50.103752   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 00:13:50.130140   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem --> /usr/share/ca-certificates/16848.pem (1338 bytes)
	I1108 00:13:50.156862   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /usr/share/ca-certificates/168482.pem (1708 bytes)
	I1108 00:13:50.181994   51228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 00:13:50.208069   51228 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 00:13:50.226069   51228 ssh_runner.go:195] Run: openssl version
	I1108 00:13:50.232941   51228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168482.pem && ln -fs /usr/share/ca-certificates/168482.pem /etc/ssl/certs/168482.pem"
	I1108 00:13:50.246981   51228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168482.pem
	I1108 00:13:50.252981   51228 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1108 00:13:50.253059   51228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168482.pem
	I1108 00:13:50.260626   51228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168482.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 00:13:50.274135   51228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 00:13:50.285611   51228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:50.290761   51228 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:50.290837   51228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:13:50.297508   51228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 00:13:50.308772   51228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16848.pem && ln -fs /usr/share/ca-certificates/16848.pem /etc/ssl/certs/16848.pem"
	I1108 00:13:50.320122   51228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16848.pem
	I1108 00:13:50.326021   51228 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1108 00:13:50.326083   51228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16848.pem
	I1108 00:13:50.332534   51228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16848.pem /etc/ssl/certs/51391683.0"
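Each pair of `openssl x509 -hash -noout` and `ln -fs` commands above installs a CA into OpenSSL's hashed trust directory: the certificate's subject hash plus a `.0` suffix (e.g. b5213941.0) becomes the symlink name OpenSSL uses for lookup. A rough Go equivalent, shelling out to openssl the same way (illustrative only, not minikube's code):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert replicates the log's pattern: ask openssl for the certificate's
// subject hash, then symlink <hash>.0 in the trust directory to the cert.
// (The .0 index disambiguates hash collisions; one cert per hash here.)
func linkCACert(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(trustDir, hash+".0")
	os.Remove(link) // emulate ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```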
	I1108 00:13:50.344381   51228 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1108 00:13:50.350040   51228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 00:13:50.356282   51228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 00:13:50.362850   51228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 00:13:50.378237   51228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 00:13:50.385607   51228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 00:13:50.392272   51228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
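The `-checkend 86400` invocations above ask openssl whether each certificate expires within the next 86400 seconds (24 hours); a non-zero exit would trigger regeneration. The same check can be done natively, as in this sketch using crypto/x509:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within d — the same question `openssl x509 -checkend` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```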
	I1108 00:13:50.399220   51228 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-039263 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-039263 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.116 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:13:50.399304   51228 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 00:13:50.399358   51228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:13:50.449693   51228 cri.go:89] found id: ""
	I1108 00:13:50.449770   51228 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 00:13:50.460225   51228 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1108 00:13:50.460256   51228 kubeadm.go:636] restartCluster start
	I1108 00:13:50.460313   51228 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 00:13:50.469777   51228 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:50.470973   51228 kubeconfig.go:92] found "default-k8s-diff-port-039263" server: "https://192.168.72.116:8444"
	I1108 00:13:50.473778   51228 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 00:13:50.482964   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:50.483022   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:50.495100   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:50.495123   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:50.495186   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:50.508735   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:46.949012   50613 system_pods.go:59] 9 kube-system pods found
	I1108 00:13:46.950252   50613 system_pods.go:61] "coredns-5dd5756b68-7djdr" [a1459bf3-703b-418a-bc22-c98e285c6e31] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 00:13:46.950302   50613 system_pods.go:61] "coredns-5dd5756b68-8qjbd" [fa7b05fd-725b-4c9c-815e-360f2bef8ee6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 00:13:46.950336   50613 system_pods.go:61] "etcd-embed-certs-253253" [2631ed7d-3af4-4848-bbb8-c77038f8a1f4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 00:13:46.950369   50613 system_pods.go:61] "kube-apiserver-embed-certs-253253" [80b3e8da-6474-4fd8-bb86-0d9cc70086ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 00:13:46.950391   50613 system_pods.go:61] "kube-controller-manager-embed-certs-253253" [ee19def3-043a-4832-8153-52aaf8b4748a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 00:13:46.950407   50613 system_pods.go:61] "kube-proxy-rsgkf" [509d66e3-b034-4dcd-a16e-b2f93b9efa6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 00:13:46.950482   50613 system_pods.go:61] "kube-scheduler-embed-certs-253253" [ef7bb9c3-98c8-45d8-8f54-852fb639b408] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 00:13:46.950497   50613 system_pods.go:61] "metrics-server-57f55c9bc5-s7ldx" [61cd423c-edbd-4d0c-87e8-1ac8e52c70e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:13:46.950507   50613 system_pods.go:61] "storage-provisioner" [d6157b7c-6b52-4ca8-a935-d68a0291305f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 00:13:46.950519   50613 system_pods.go:74] duration metric: took 17.457991ms to wait for pod list to return data ...
	I1108 00:13:46.950532   50613 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:13:46.956062   50613 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:13:46.956142   50613 node_conditions.go:123] node cpu capacity is 2
	I1108 00:13:46.956165   50613 node_conditions.go:105] duration metric: took 5.622732ms to run NodePressure ...
	I1108 00:13:46.956193   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:47.272695   50613 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1108 00:13:47.280001   50613 kubeadm.go:787] kubelet initialised
	I1108 00:13:47.280031   50613 kubeadm.go:788] duration metric: took 7.30064ms waiting for restarted kubelet to initialise ...
	I1108 00:13:47.280041   50613 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:13:47.290043   50613 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-7djdr" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:50.378703   50613 pod_ready.go:102] pod "coredns-5dd5756b68-7djdr" in "kube-system" namespace has status "Ready":"False"
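The pod_ready.go entries show the post-restart wait: up to 4m0s per system-critical pod for its Ready condition to turn True. A minimal client-go sketch of that wait (the kubeconfig path is a placeholder, and this plain polling loop stands in for minikube's own helper):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod's status until its Ready condition is True or ctx expires.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %s/%s not Ready: %w", ns, name, ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitPodReady(ctx, cs, "kube-system", "coredns-5dd5756b68-7djdr"); err != nil {
		fmt.Println(err)
	}
}
```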
	I1108 00:13:50.370740   50505 pod_ready.go:102] pod "kube-apiserver-no-preload-320390" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:51.912802   50505 pod_ready.go:92] pod "kube-apiserver-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:51.912845   50505 pod_ready.go:81] duration metric: took 6.027451924s waiting for pod "kube-apiserver-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.912861   50505 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.920043   50505 pod_ready.go:92] pod "kube-controller-manager-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:51.920073   50505 pod_ready.go:81] duration metric: took 7.195906ms waiting for pod "kube-controller-manager-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.920085   50505 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c4mbm" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.927863   50505 pod_ready.go:92] pod "kube-proxy-c4mbm" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:51.927887   50505 pod_ready.go:81] duration metric: took 7.793258ms waiting for pod "kube-proxy-c4mbm" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.927900   50505 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.934444   50505 pod_ready.go:92] pod "kube-scheduler-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:51.934470   50505 pod_ready.go:81] duration metric: took 6.560509ms waiting for pod "kube-scheduler-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:51.934481   50505 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:50.131947   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:50.132491   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:50.132526   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:50.132397   51822 retry.go:31] will retry after 1.410573405s: waiting for machine to come up
	I1108 00:13:51.544674   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:51.545073   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:51.545099   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:51.545025   51822 retry.go:31] will retry after 1.773802671s: waiting for machine to come up
	I1108 00:13:53.320381   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:53.320863   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:53.320893   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:53.320805   51822 retry.go:31] will retry after 3.166868207s: waiting for machine to come up
	I1108 00:13:51.009734   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:51.009825   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:51.026052   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:51.509697   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:51.509786   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:51.527840   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:52.009557   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:52.009656   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:52.025049   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:52.509606   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:52.509707   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:52.526174   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:53.008803   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:53.008954   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:53.022472   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:53.508900   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:53.509005   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:53.525225   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:54.009884   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:54.009974   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:54.022171   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:54.509280   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:54.509376   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:54.522041   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:55.009670   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:55.009752   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:55.023035   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:55.509640   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:55.509717   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:55.526730   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:52.836317   50613 pod_ready.go:102] pod "coredns-5dd5756b68-7djdr" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:53.332094   50613 pod_ready.go:92] pod "coredns-5dd5756b68-7djdr" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:53.332121   50613 pod_ready.go:81] duration metric: took 6.042047013s waiting for pod "coredns-5dd5756b68-7djdr" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:53.332133   50613 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-8qjbd" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:53.337858   50613 pod_ready.go:92] pod "coredns-5dd5756b68-8qjbd" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:53.337882   50613 pod_ready.go:81] duration metric: took 5.740229ms waiting for pod "coredns-5dd5756b68-8qjbd" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:53.337894   50613 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:55.356131   50613 pod_ready.go:102] pod "etcd-embed-certs-253253" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:54.323357   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:56.328874   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:58.820773   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:56.490058   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:56.490553   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:56.490590   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:56.490511   51822 retry.go:31] will retry after 3.18441493s: waiting for machine to come up
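The retry.go lines interleaved above ("will retry after 1.410573405s", "1.773802671s", "3.18441493s", ...) show the machine-startup wait using growing, jittered delays. A sketch of that retry shape (the doubling-plus-jitter policy is an assumption for illustration, not necessarily retry.go's exact schedule):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn with exponentially growing, jittered delays,
// similar in spirit to the retry.go lines in the log.
func retryWithBackoff(fn func() error, attempts int, base time.Duration) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)/2)) // add up to 50% jitter
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay *= 2
	}
	return err
}

func main() {
	i := 0
	err := retryWithBackoff(func() error {
		i++
		if i < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	}, 6, time.Second)
	fmt.Println("result:", err)
}
```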
	I1108 00:13:56.009549   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:56.009646   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:56.024559   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:56.508912   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:56.509015   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:56.521861   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:57.009408   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:57.009479   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:57.022156   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:57.509466   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:57.509554   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:57.522766   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:58.008909   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:58.009026   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:58.021521   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:58.509050   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:58.509134   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:58.521387   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:59.008889   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:59.008975   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:59.021781   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:13:59.509489   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:13:59.509575   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:13:59.521581   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:00.009117   51228 api_server.go:166] Checking apiserver status ...
	I1108 00:14:00.009196   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:00.022210   51228 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:00.483934   51228 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
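The long run of "Checking apiserver status" entries above is a fixed-interval poll: roughly every 500ms, run pgrep for the kube-apiserver process, and if no pid appears before the context deadline, conclude the cluster needs reconfiguring. A sketch of that pattern (the pgrep command string is taken from the log; the 10s deadline here is only for the example):

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// apiserverPID runs the same pgrep the log shows and returns the newest
// matching pid, or an error if no process matches (pgrep exits non-zero).
func apiserverPID() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", fmt.Errorf("unable to get apiserver pid: %w", err)
	}
	return strings.TrimSpace(string(out)), nil
}

// pollForAPIServer retries apiserverPID every 500ms until the deadline,
// mirroring the repeated "Checking apiserver status ..." entries.
func pollForAPIServer(ctx context.Context) (string, error) {
	for {
		if pid, err := apiserverPID(); err == nil {
			return pid, nil
		}
		select {
		case <-ctx.Done():
			return "", ctx.Err() // surfaces as "context deadline exceeded"
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if pid, err := pollForAPIServer(ctx); err != nil {
		fmt.Println("needs reconfigure:", err)
	} else {
		fmt.Println("apiserver pid:", pid)
	}
}
```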
	I1108 00:14:00.483990   51228 kubeadm.go:1128] stopping kube-system containers ...
	I1108 00:14:00.484004   51228 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1108 00:14:00.484066   51228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:14:00.528120   51228 cri.go:89] found id: ""
	I1108 00:14:00.528178   51228 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1108 00:14:00.544876   51228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:14:00.553827   51228 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:14:00.553883   51228 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:14:00.562695   51228 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1108 00:14:00.562721   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:00.676044   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:13:57.856242   50613 pod_ready.go:102] pod "etcd-embed-certs-253253" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:58.855444   50613 pod_ready.go:92] pod "etcd-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:58.855471   50613 pod_ready.go:81] duration metric: took 5.517568786s waiting for pod "etcd-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.855479   50613 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.860431   50613 pod_ready.go:92] pod "kube-apiserver-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:58.860453   50613 pod_ready.go:81] duration metric: took 4.966273ms waiting for pod "kube-apiserver-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.860464   50613 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.865854   50613 pod_ready.go:92] pod "kube-controller-manager-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:58.865874   50613 pod_ready.go:81] duration metric: took 5.40177ms waiting for pod "kube-controller-manager-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.865914   50613 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rsgkf" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.870805   50613 pod_ready.go:92] pod "kube-proxy-rsgkf" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:58.870826   50613 pod_ready.go:81] duration metric: took 4.898411ms waiting for pod "kube-proxy-rsgkf" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.870835   50613 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.958009   50613 pod_ready.go:92] pod "kube-scheduler-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:13:58.958034   50613 pod_ready.go:81] duration metric: took 87.190501ms waiting for pod "kube-scheduler-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:13:58.958052   50613 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:01.265674   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:00.823696   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:03.322129   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:13:59.678086   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:13:59.678579   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | unable to find current IP address of domain old-k8s-version-590541 in network mk-old-k8s-version-590541
	I1108 00:13:59.678598   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | I1108 00:13:59.678528   51822 retry.go:31] will retry after 4.30352873s: waiting for machine to come up
	I1108 00:14:03.983994   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:03.984437   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has current primary IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:03.984474   50022 main.go:141] libmachine: (old-k8s-version-590541) Found IP for machine: 192.168.50.49
	I1108 00:14:03.984489   50022 main.go:141] libmachine: (old-k8s-version-590541) Reserving static IP address...
	I1108 00:14:03.984947   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "old-k8s-version-590541", mac: "52:54:00:3c:aa:82", ip: "192.168.50.49"} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:03.984981   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | skip adding static IP to network mk-old-k8s-version-590541 - found existing host DHCP lease matching {name: "old-k8s-version-590541", mac: "52:54:00:3c:aa:82", ip: "192.168.50.49"}
	I1108 00:14:03.985000   50022 main.go:141] libmachine: (old-k8s-version-590541) Reserved static IP address: 192.168.50.49
	I1108 00:14:03.985020   50022 main.go:141] libmachine: (old-k8s-version-590541) Waiting for SSH to be available...
	I1108 00:14:03.985034   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Getting to WaitForSSH function...
	I1108 00:14:03.987671   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:03.988083   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:03.988116   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:03.988388   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Using SSH client type: external
	I1108 00:14:03.988424   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Using SSH private key: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa (-rw-------)
	I1108 00:14:03.988461   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.49 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1108 00:14:03.988481   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | About to run SSH command:
	I1108 00:14:03.988496   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | exit 0
	I1108 00:14:04.080867   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | SSH cmd err, output: <nil>: 
	I1108 00:14:04.081275   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetConfigRaw
	I1108 00:14:04.081955   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetIP
	I1108 00:14:04.085061   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.085512   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.085554   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.085942   50022 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/config.json ...
	I1108 00:14:04.086165   50022 machine.go:88] provisioning docker machine ...
	I1108 00:14:04.086188   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:14:04.086417   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetMachineName
	I1108 00:14:04.086612   50022 buildroot.go:166] provisioning hostname "old-k8s-version-590541"
	I1108 00:14:04.086634   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetMachineName
	I1108 00:14:04.086822   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:04.089431   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.089808   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.089838   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.090007   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:04.090201   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.090362   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.090535   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:04.090686   50022 main.go:141] libmachine: Using SSH client type: native
	I1108 00:14:04.090991   50022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1108 00:14:04.091002   50022 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-590541 && echo "old-k8s-version-590541" | sudo tee /etc/hostname
	I1108 00:14:04.228526   50022 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-590541
	
	I1108 00:14:04.228561   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:04.232020   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.232390   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.232454   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.232743   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:04.232930   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.233109   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.233264   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:04.233430   50022 main.go:141] libmachine: Using SSH client type: native
	I1108 00:14:04.233786   50022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1108 00:14:04.233812   50022 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-590541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-590541/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-590541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 00:14:04.370396   50022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 00:14:04.370424   50022 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9647/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9647/.minikube}
	I1108 00:14:04.370469   50022 buildroot.go:174] setting up certificates
	I1108 00:14:04.370487   50022 provision.go:83] configureAuth start
	I1108 00:14:04.370505   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetMachineName
	I1108 00:14:04.370779   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetIP
	I1108 00:14:04.373683   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.374081   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.374111   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.374240   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:04.377048   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.377441   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.377469   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.377596   50022 provision.go:138] copyHostCerts
	I1108 00:14:04.377658   50022 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem, removing ...
	I1108 00:14:04.377678   50022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem
	I1108 00:14:04.377748   50022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/ca.pem (1078 bytes)
	I1108 00:14:04.377855   50022 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem, removing ...
	I1108 00:14:04.377867   50022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem
	I1108 00:14:04.377893   50022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/cert.pem (1123 bytes)
	I1108 00:14:04.377965   50022 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem, removing ...
	I1108 00:14:04.377979   50022 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem
	I1108 00:14:04.378005   50022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9647/.minikube/key.pem (1675 bytes)
	I1108 00:14:04.378064   50022 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-590541 san=[192.168.50.49 192.168.50.49 localhost 127.0.0.1 minikube old-k8s-version-590541]
	I1108 00:14:04.534682   50022 provision.go:172] copyRemoteCerts
	I1108 00:14:04.534750   50022 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 00:14:04.534778   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:04.538002   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.538379   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.538408   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.538639   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:04.538789   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.538975   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:04.539146   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:14:04.632308   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 00:14:01.961492   51228 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.285410864s)
	I1108 00:14:01.961529   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:02.165604   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:02.235655   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
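Restart reapplies kubeadm's init phases one at a time (certs, kubeconfig, kubelet-start, control-plane, etcd) against the copied /var/tmp/minikube/kubeadm.yaml rather than running a full `kubeadm init`. A sketch of driving that sequence (binary and config paths as seen in the log; sudo and PATH handling simplified):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.28.3/kubeadm" // version dir from the log's PATH override
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", args, err)
			return
		}
	}
}
```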
	I1108 00:14:02.352126   51228 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:14:02.352212   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:02.370538   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:02.884696   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:03.384139   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:03.884529   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:04.384134   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:04.884877   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:04.913244   51228 api_server.go:72] duration metric: took 2.56112461s to wait for apiserver process to appear ...
	I1108 00:14:04.913273   51228 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:14:04.913295   51228 api_server.go:253] Checking apiserver healthz at https://192.168.72.116:8444/healthz ...
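Once the process exists, the check switches from pgrep to polling the apiserver's /healthz endpoint over HTTPS. A minimal sketch of one probe (certificate verification is skipped here purely for brevity; the real check trusts the cluster CA instead):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustrative only: skipping verification is NOT what minikube does.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.72.116:8444/healthz") // endpoint from the log
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
}
```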
	I1108 00:14:04.657542   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1108 00:14:04.682815   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 00:14:04.709405   50022 provision.go:86] duration metric: configureAuth took 338.902281ms
	I1108 00:14:04.709439   50022 buildroot.go:189] setting minikube options for container-runtime
	I1108 00:14:04.709651   50022 config.go:182] Loaded profile config "old-k8s-version-590541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1108 00:14:04.709741   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:04.713141   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.713520   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:04.713561   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:04.713718   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:04.713923   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.714108   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:04.714259   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:04.714497   50022 main.go:141] libmachine: Using SSH client type: native
	I1108 00:14:04.714885   50022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1108 00:14:04.714905   50022 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 00:14:05.055346   50022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 00:14:05.055427   50022 machine.go:91] provisioned docker machine in 969.247821ms
	I1108 00:14:05.055446   50022 start.go:300] post-start starting for "old-k8s-version-590541" (driver="kvm2")
	I1108 00:14:05.055459   50022 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 00:14:05.055493   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:14:05.055841   50022 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 00:14:05.055895   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:05.058959   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.059423   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:05.059457   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.059601   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:05.059775   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:05.059895   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:05.060042   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:14:05.151543   50022 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 00:14:05.155876   50022 info.go:137] Remote host: Buildroot 2021.02.12
	I1108 00:14:05.155902   50022 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/addons for local assets ...
	I1108 00:14:05.155969   50022 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9647/.minikube/files for local assets ...
	I1108 00:14:05.156056   50022 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem -> 168482.pem in /etc/ssl/certs
	I1108 00:14:05.156229   50022 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 00:14:05.165742   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:14:05.190622   50022 start.go:303] post-start completed in 135.159333ms
	I1108 00:14:05.190648   50022 fix.go:56] fixHost completed within 22.904612851s
	I1108 00:14:05.190673   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:05.193716   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.194165   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:05.194195   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.194480   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:05.194725   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:05.194929   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:05.195106   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:05.195260   50022 main.go:141] libmachine: Using SSH client type: native
	I1108 00:14:05.195755   50022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.50.49 22 <nil> <nil>}
	I1108 00:14:05.195778   50022 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1108 00:14:05.326443   50022 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699402445.269657345
	
	I1108 00:14:05.326467   50022 fix.go:206] guest clock: 1699402445.269657345
	I1108 00:14:05.326476   50022 fix.go:219] Guest: 2023-11-08 00:14:05.269657345 +0000 UTC Remote: 2023-11-08 00:14:05.190652611 +0000 UTC m=+370.589908297 (delta=79.004734ms)
	I1108 00:14:05.326524   50022 fix.go:190] guest clock delta is within tolerance: 79.004734ms
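fix.go measures guest clock skew by running date +%s.%N over SSH and diffing the result against the host clock, which is what produces the delta line above. A minimal Go sketch of that computation, using the timestamps from this run; the 2s tolerance here is illustrative, not necessarily minikube's actual bound:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // guestClockDelta parses the output of `date +%s.%N` and returns
    // guest-minus-host skew.
    func guestClockDelta(remoteOut string, host time.Time) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(remoteOut), ".", 2)
        if len(parts) != 2 {
            return 0, fmt.Errorf("unexpected date output %q", remoteOut)
        }
        secs, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        nanos, err := strconv.ParseInt(parts[1], 10, 64)
        if err != nil {
            return 0, err
        }
        return time.Unix(secs, nanos).Sub(host), nil
    }

    func main() {
        host := time.Unix(1699402445, 190652611) // Remote reference time above
        d, _ := guestClockDelta("1699402445.269657345", host)
        if d < 0 {
            d = -d
        }
        if d < 2*time.Second {
            fmt.Printf("guest clock delta is within tolerance: %v\n", d) // ~79.004734ms
        }
    }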
	I1108 00:14:05.326531   50022 start.go:83] releasing machines lock for "old-k8s-version-590541", held for 23.040527062s
	I1108 00:14:05.326558   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:14:05.326845   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetIP
	I1108 00:14:05.329775   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.330225   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:05.330254   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.330447   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:14:05.331102   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:14:05.331338   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:14:05.331424   50022 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 00:14:05.331467   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:05.331584   50022 ssh_runner.go:195] Run: cat /version.json
	I1108 00:14:05.331610   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:14:05.334586   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.334817   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.335125   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:05.335182   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.335225   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:05.335307   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:05.335339   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:05.335418   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:05.335536   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:14:05.335603   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:05.335774   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:14:05.335783   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:14:05.335906   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:14:05.336063   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:14:05.423679   50022 ssh_runner.go:195] Run: systemctl --version
	I1108 00:14:05.446956   50022 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 00:14:05.598713   50022 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 00:14:05.605558   50022 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 00:14:05.605641   50022 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 00:14:05.620183   50022 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 00:14:05.620211   50022 start.go:472] detecting cgroup driver to use...
	I1108 00:14:05.620277   50022 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 00:14:05.635981   50022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 00:14:05.649637   50022 docker.go:203] disabling cri-docker service (if available) ...
	I1108 00:14:05.649699   50022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 00:14:05.664232   50022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 00:14:05.678205   50022 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 00:14:05.791991   50022 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 00:14:05.925002   50022 docker.go:219] disabling docker service ...
	I1108 00:14:05.925135   50022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 00:14:05.939853   50022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 00:14:05.955518   50022 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 00:14:06.074872   50022 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 00:14:06.189371   50022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 00:14:06.202247   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 00:14:06.219012   50022 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1108 00:14:06.219082   50022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:14:06.229837   50022 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1108 00:14:06.229911   50022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:14:06.239769   50022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 00:14:06.248633   50022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
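The three sed edits above rewrite the cri-o drop-in /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager, and re-add conmon_cgroup under it. After they run, the relevant keys should read roughly as follows (reconstructed excerpt, not a dump from this run):

    pause_image = "registry.k8s.io/pause:3.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"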
	I1108 00:14:06.257717   50022 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 00:14:06.268893   50022 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 00:14:06.277427   50022 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1108 00:14:06.277495   50022 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1108 00:14:06.290771   50022 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 00:14:06.299918   50022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 00:14:06.421038   50022 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 00:14:06.587544   50022 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 00:14:06.587624   50022 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 00:14:06.592726   50022 start.go:540] Will wait 60s for crictl version
	I1108 00:14:06.592781   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:06.596695   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 00:14:06.637642   50022 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1108 00:14:06.637733   50022 ssh_runner.go:195] Run: crio --version
	I1108 00:14:06.690026   50022 ssh_runner.go:195] Run: crio --version
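Between the restart and the version probes, start.go waits up to 60s each for the crio socket to exist and for crictl to answer. The shape of that wait is a stat-with-deadline loop; a local Go sketch (the real code runs stat through the SSH runner):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForPath polls until path exists or the deadline passes.
    func waitForPath(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
    }

    func main() {
        fmt.Println(waitForPath("/var/run/crio/crio.sock", 60*time.Second))
    }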
	I1108 00:14:06.740455   50022 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1108 00:14:03.266720   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:05.764837   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:05.322160   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:07.329491   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:06.741799   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetIP
	I1108 00:14:06.744301   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:06.744599   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:14:06.744630   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:14:06.744861   50022 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1108 00:14:06.749385   50022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:14:06.762645   50022 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1108 00:14:06.762732   50022 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:14:06.804386   50022 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1108 00:14:06.804458   50022 ssh_runner.go:195] Run: which lz4
	I1108 00:14:06.808948   50022 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1108 00:14:06.813319   50022 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1108 00:14:06.813355   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1108 00:14:08.476578   50022 crio.go:444] Took 1.667668 seconds to copy over tarball
	I1108 00:14:08.476646   50022 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
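The preload path is: stat the guest-side target so a prior copy can be reused, scp the ~440 MB tarball, then unpack it with lz4 into /var, which backs both the image store and Kubernetes state. A sketch of the guest-side extraction step in Go, assuming the tarball already sits at /preloaded.tar.lz4 as above:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func extractPreload() error {
        // -I lz4 decompresses through lz4; -C /var unpacks in place.
        return exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var",
            "-xf", "/preloaded.tar.lz4").Run()
    }

    func main() {
        fmt.Println(extractPreload())
    }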
	I1108 00:14:09.078810   51228 api_server.go:279] https://192.168.72.116:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:14:09.078843   51228 api_server.go:103] status: https://192.168.72.116:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:14:09.078859   51228 api_server.go:253] Checking apiserver healthz at https://192.168.72.116:8444/healthz ...
	I1108 00:14:09.140049   51228 api_server.go:279] https://192.168.72.116:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:14:09.140083   51228 api_server.go:103] status: https://192.168.72.116:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:14:09.641000   51228 api_server.go:253] Checking apiserver healthz at https://192.168.72.116:8444/healthz ...
	I1108 00:14:09.647216   51228 api_server.go:279] https://192.168.72.116:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:14:09.647247   51228 api_server.go:103] status: https://192.168.72.116:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:14:10.140446   51228 api_server.go:253] Checking apiserver healthz at https://192.168.72.116:8444/healthz ...
	I1108 00:14:10.148995   51228 api_server.go:279] https://192.168.72.116:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 00:14:10.149028   51228 api_server.go:103] status: https://192.168.72.116:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 00:14:10.640719   51228 api_server.go:253] Checking apiserver healthz at https://192.168.72.116:8444/healthz ...
	I1108 00:14:10.649076   51228 api_server.go:279] https://192.168.72.116:8444/healthz returned 200:
	ok
	I1108 00:14:10.660508   51228 api_server.go:141] control plane version: v1.28.3
	I1108 00:14:10.660545   51228 api_server.go:131] duration metric: took 5.747263547s to wait for apiserver health ...
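The healthz sequence above is the normal restart progression: 403 while anonymous access to /healthz is still forbidden, then 500 while individual poststarthooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish, then 200. A minimal Go sketch of that polling loop; the ~500ms cadence matches the timestamps above, and TLS verification is skipped only to keep the sketch self-contained (the real check presents cluster credentials):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                code := resp.StatusCode
                resp.Body.Close()
                if code == http.StatusOK {
                    return nil // apiserver reports healthy
                }
                // 403: anonymous not yet authorized; 500: poststarthooks pending
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        fmt.Println(waitForHealthz("https://192.168.72.116:8444/healthz", time.Minute))
    }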
	I1108 00:14:10.660556   51228 cni.go:84] Creating CNI manager for ""
	I1108 00:14:10.660566   51228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:14:10.662644   51228 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:14:10.664069   51228 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:14:10.682131   51228 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1108 00:14:10.709582   51228 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:14:10.725779   51228 system_pods.go:59] 8 kube-system pods found
	I1108 00:14:10.725840   51228 system_pods.go:61] "coredns-5dd5756b68-rz9t4" [d7b24f41-ed9e-4b07-991b-8587f49d7902] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 00:14:10.725854   51228 system_pods.go:61] "etcd-default-k8s-diff-port-039263" [f58b5fbb-a565-4d47-8b3d-ea62169dc0fc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 00:14:10.725868   51228 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-039263" [d0c3391c-679f-49ad-a6ff-ef62d74a62ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 00:14:10.725882   51228 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-039263" [33f54c9b-cc67-4662-8db9-c735fde4d9a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 00:14:10.725903   51228 system_pods.go:61] "kube-proxy-z7b8g" [079a28b1-dbad-4e62-a9ea-b667206433cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 00:14:10.725914   51228 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-039263" [629f940b-6d2a-4c3c-8a11-2805dc2c04d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 00:14:10.725927   51228 system_pods.go:61] "metrics-server-57f55c9bc5-nlhpn" [f5d69cb1-4266-45fc-9bab-57053f915aa0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:14:10.725941   51228 system_pods.go:61] "storage-provisioner" [fb6541da-3ed3-4abb-b534-643bb5faf7d3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 00:14:10.725953   51228 system_pods.go:74] duration metric: took 16.346941ms to wait for pod list to return data ...
	I1108 00:14:10.725965   51228 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:14:10.730466   51228 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:14:10.730555   51228 node_conditions.go:123] node cpu capacity is 2
	I1108 00:14:10.730574   51228 node_conditions.go:105] duration metric: took 4.602969ms to run NodePressure ...
	I1108 00:14:10.730595   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:07.772448   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:10.267241   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:09.824633   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:11.829090   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:14.015104   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:11.781938   50022 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.305246635s)
	I1108 00:14:11.781979   50022 crio.go:451] Took 3.305377 seconds to extract the tarball
	I1108 00:14:11.781999   50022 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1108 00:14:11.837911   50022 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 00:14:11.907599   50022 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1108 00:14:11.907634   50022 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1108 00:14:11.907702   50022 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:14:11.907965   50022 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1108 00:14:11.907983   50022 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1108 00:14:11.908131   50022 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1108 00:14:11.907966   50022 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1108 00:14:11.908257   50022 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1108 00:14:11.908131   50022 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1108 00:14:11.908365   50022 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1108 00:14:11.909163   50022 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1108 00:14:11.909239   50022 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1108 00:14:11.909251   50022 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1108 00:14:11.909332   50022 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1108 00:14:11.909171   50022 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1108 00:14:11.909397   50022 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1108 00:14:11.909435   50022 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:14:11.909625   50022 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1108 00:14:12.040043   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1108 00:14:12.042004   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1108 00:14:12.047478   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1108 00:14:12.051016   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1108 00:14:12.095045   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1108 00:14:12.126645   50022 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1108 00:14:12.126718   50022 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1108 00:14:12.126788   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.133035   50022 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1108 00:14:12.133078   50022 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1108 00:14:12.133120   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.164621   50022 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1108 00:14:12.164686   50022 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1108 00:14:12.164754   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.182223   50022 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1108 00:14:12.182267   50022 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1108 00:14:12.182318   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.201151   50022 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1108 00:14:12.201196   50022 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1108 00:14:12.201244   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.201255   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1108 00:14:12.201306   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1108 00:14:12.201305   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1108 00:14:12.201341   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1108 00:14:12.203375   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1108 00:14:12.208529   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1108 00:14:12.341873   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1108 00:14:12.341901   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1108 00:14:12.341954   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1108 00:14:12.341960   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1108 00:14:12.356561   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1108 00:14:12.356663   50022 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1108 00:14:12.361927   50022 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1108 00:14:12.361962   50022 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1108 00:14:12.362023   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.382770   50022 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1108 00:14:12.382819   50022 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1108 00:14:12.382864   50022 ssh_runner.go:195] Run: which crictl
	I1108 00:14:12.406169   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1108 00:14:12.406213   50022 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1108 00:14:12.406228   50022 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1108 00:14:12.406273   50022 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1108 00:14:12.406313   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1108 00:14:12.406274   50022 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1108 00:14:12.863910   50022 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:14:14.488498   50022 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0: (2.082152502s)
	I1108 00:14:14.488536   50022 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.082234083s)
	I1108 00:14:14.488548   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1108 00:14:14.488571   50022 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1108 00:14:14.488623   50022 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0: (2.082249259s)
	I1108 00:14:14.488666   50022 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1108 00:14:14.488711   50022 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.624766966s)
	I1108 00:14:14.488762   50022 cache_images.go:92] LoadImages completed in 2.581114029s
	W1108 00:14:14.488842   50022 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17585-9647/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
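LoadImages above makes a per-image decision: podman image inspect for the pinned ID; on a mismatch, crictl rmi the stale tag and podman load the cached tarball; a missing tarball (coredns_1.6.2 here) surfaces as the warning rather than a fatal error. A condensed sketch of that branch, with hypothetical helper and argument names:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureImage loads a cached image tarball iff the runtime does not
    // already hold the expected image ID.
    func ensureImage(ref, wantID, tarball string) error {
        out, err := exec.Command("sudo", "podman", "image", "inspect",
            "--format", "{{.Id}}", ref).Output()
        if err == nil && string(out) == wantID+"\n" {
            return nil // already present at the pinned hash
        }
        // Remove any stale tag so the load cannot leave two IDs behind one ref.
        exec.Command("sudo", "crictl", "rmi", ref).Run()
        if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
            return fmt.Errorf("loading %s from %s: %w", ref, tarball, err)
        }
        return nil
    }

    func main() {
        // Hash taken from the pause:3.1 transfer decision in this log.
        fmt.Println(ensureImage("registry.k8s.io/pause:3.1",
            "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e",
            "/var/lib/minikube/images/pause_3.1"))
    }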
	I1108 00:14:14.488915   50022 ssh_runner.go:195] Run: crio config
	I1108 00:14:14.557127   50022 cni.go:84] Creating CNI manager for ""
	I1108 00:14:14.557155   50022 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:14:14.557176   50022 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1108 00:14:14.557204   50022 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.49 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-590541 NodeName:old-k8s-version-590541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.49"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.49 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1108 00:14:14.557391   50022 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.49
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-590541"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.49
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.49"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-590541
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.49:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 00:14:14.557508   50022 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-590541 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.49
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-590541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1108 00:14:14.557579   50022 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1108 00:14:14.568423   50022 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 00:14:14.568501   50022 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 00:14:14.578581   50022 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1108 00:14:14.596389   50022 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 00:14:14.613956   50022 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I1108 00:14:14.631988   50022 ssh_runner.go:195] Run: grep 192.168.50.49	control-plane.minikube.internal$ /etc/hosts
	I1108 00:14:14.636236   50022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.49	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 00:14:14.648849   50022 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541 for IP: 192.168.50.49
	I1108 00:14:14.648888   50022 certs.go:190] acquiring lock for shared ca certs: {Name:mk4160b58968d653e0285c6473ef529f2f32988c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:14:14.649071   50022 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key
	I1108 00:14:14.649126   50022 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key
	I1108 00:14:14.649231   50022 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/client.key
	I1108 00:14:14.649312   50022 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/apiserver.key.5b7c76e3
	I1108 00:14:14.649375   50022 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/proxy-client.key
	I1108 00:14:14.649542   50022 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem (1338 bytes)
	W1108 00:14:14.649587   50022 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848_empty.pem, impossibly tiny 0 bytes
	I1108 00:14:14.649597   50022 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 00:14:14.649636   50022 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/ca.pem (1078 bytes)
	I1108 00:14:14.649677   50022 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/cert.pem (1123 bytes)
	I1108 00:14:14.649714   50022 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/certs/home/jenkins/minikube-integration/17585-9647/.minikube/certs/key.pem (1675 bytes)
	I1108 00:14:14.649771   50022 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem (1708 bytes)
	I1108 00:14:11.058474   51228 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1108 00:14:11.064805   51228 kubeadm.go:787] kubelet initialised
	I1108 00:14:11.064852   51228 kubeadm.go:788] duration metric: took 6.346592ms waiting for restarted kubelet to initialise ...
	I1108 00:14:11.064863   51228 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:14:11.073499   51228 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-rz9t4" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:11.089759   51228 pod_ready.go:97] node "default-k8s-diff-port-039263" hosting pod "coredns-5dd5756b68-rz9t4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.089791   51228 pod_ready.go:81] duration metric: took 16.257238ms waiting for pod "coredns-5dd5756b68-rz9t4" in "kube-system" namespace to be "Ready" ...
	E1108 00:14:11.089803   51228 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-039263" hosting pod "coredns-5dd5756b68-rz9t4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.089811   51228 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:11.100580   51228 pod_ready.go:97] node "default-k8s-diff-port-039263" hosting pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.100605   51228 pod_ready.go:81] duration metric: took 10.783802ms waiting for pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	E1108 00:14:11.100615   51228 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-039263" hosting pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.100621   51228 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:11.113797   51228 pod_ready.go:97] node "default-k8s-diff-port-039263" hosting pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.113826   51228 pod_ready.go:81] duration metric: took 13.195367ms waiting for pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	E1108 00:14:11.113838   51228 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-039263" hosting pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.113847   51228 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:11.124704   51228 pod_ready.go:97] node "default-k8s-diff-port-039263" hosting pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.124736   51228 pod_ready.go:81] duration metric: took 10.87946ms waiting for pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	E1108 00:14:11.124750   51228 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-039263" hosting pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-039263" has status "Ready":"False"
	I1108 00:14:11.124760   51228 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-z7b8g" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:11.915650   51228 pod_ready.go:92] pod "kube-proxy-z7b8g" in "kube-system" namespace has status "Ready":"True"
	I1108 00:14:11.915674   51228 pod_ready.go:81] duration metric: took 790.904941ms waiting for pod "kube-proxy-z7b8g" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:11.915686   51228 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:14.011244   51228 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:12.537889   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:14.767882   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:16.322840   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:18.323955   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:14.650662   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1108 00:14:14.682536   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1108 00:14:14.708618   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 00:14:14.737947   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 00:14:14.768365   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 00:14:14.795469   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 00:14:14.824086   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 00:14:14.851375   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 00:14:14.878638   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 00:14:14.906647   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/certs/16848.pem --> /usr/share/ca-certificates/16848.pem (1338 bytes)
	I1108 00:14:14.933316   50022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/ssl/certs/168482.pem --> /usr/share/ca-certificates/168482.pem (1708 bytes)
	I1108 00:14:14.961937   50022 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 00:14:14.980167   50022 ssh_runner.go:195] Run: openssl version
	I1108 00:14:14.986053   50022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16848.pem && ln -fs /usr/share/ca-certificates/16848.pem /etc/ssl/certs/16848.pem"
	I1108 00:14:14.996201   50022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16848.pem
	I1108 00:14:15.001410   50022 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:12 /usr/share/ca-certificates/16848.pem
	I1108 00:14:15.001490   50022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16848.pem
	I1108 00:14:15.008681   50022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16848.pem /etc/ssl/certs/51391683.0"
	I1108 00:14:15.022034   50022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168482.pem && ln -fs /usr/share/ca-certificates/168482.pem /etc/ssl/certs/168482.pem"
	I1108 00:14:15.031992   50022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168482.pem
	I1108 00:14:15.037854   50022 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:12 /usr/share/ca-certificates/168482.pem
	I1108 00:14:15.037910   50022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168482.pem
	I1108 00:14:15.045107   50022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168482.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 00:14:15.057464   50022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 00:14:15.070137   50022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:14:15.075848   50022 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:14:15.075917   50022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 00:14:15.083414   50022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 00:14:15.094499   50022 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1108 00:14:15.099437   50022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 00:14:15.105940   50022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 00:14:15.112527   50022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 00:14:15.118429   50022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 00:14:15.124769   50022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 00:14:15.130975   50022 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
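
Editor's note: `openssl x509 -checkend 86400` exits non-zero if the certificate expires within the next 24 hours; the run above applies it to each control-plane cert before deciding whether regeneration is needed. The same check in pure Go, as a sketch rather than minikube's code:

    // checkend_sketch.go - Go equivalent of `openssl x509 -checkend 86400`.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within duration d of now.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(2)
    	}
    	if soon {
    		fmt.Println("certificate expires within 24h; regenerate")
    		os.Exit(1) // same exit convention as openssl -checkend
    	}
    	fmt.Println("certificate is good for at least 24h")
    }
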
	I1108 00:14:15.136772   50022 kubeadm.go:404] StartCluster: {Name:old-k8s-version-590541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-590541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.49 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 00:14:15.136903   50022 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 00:14:15.136952   50022 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:14:15.184018   50022 cri.go:89] found id: ""
	I1108 00:14:15.184095   50022 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 00:14:15.196900   50022 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1108 00:14:15.196924   50022 kubeadm.go:636] restartCluster start
	I1108 00:14:15.196994   50022 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 00:14:15.208810   50022 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:15.210399   50022 kubeconfig.go:92] found "old-k8s-version-590541" server: "https://192.168.50.49:8443"
	I1108 00:14:15.214114   50022 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 00:14:15.223586   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:15.223644   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:15.234506   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:15.234525   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:15.234565   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:15.244971   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:15.745626   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:15.745698   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:15.757830   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:16.246012   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:16.246090   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:16.258583   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:16.745965   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:16.746045   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:16.758317   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:17.245985   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:17.246087   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:17.257615   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:17.745646   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:17.745715   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:17.757591   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:18.245666   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:18.245773   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:18.258225   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:18.745765   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:18.745842   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:18.756699   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:19.245946   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:19.246016   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:19.258255   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
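
Editor's note: the burst of "Checking apiserver status ..." lines is a fixed-interval poll: every ~500ms minikube runs `pgrep -xnf kube-apiserver.*minikube.*` over SSH and treats exit status 1 (no match) as "not up yet". A local-process sketch of that loop (function and variable names are illustrative):

    // waitpid_sketch.go - poll pgrep on a fixed interval until a process
    // matching the pattern appears or the deadline passes (illustrative).
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    	"time"
    )

    func waitForProcess(pattern string, interval, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		out, err := exec.Command("pgrep", "-xnf", pattern).Output()
    		if err == nil {
    			return strings.TrimSpace(string(out)), nil // pid found
    		}
    		// pgrep exits 1 when nothing matches; the log above shows the
    		// caller simply retries on any failure until the deadline.
    		if time.Now().After(deadline) {
    			return "", fmt.Errorf("no process matching %q after %s", pattern, timeout)
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	pid, err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 2*time.Minute)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("apiserver pid:", pid)
    }
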
	I1108 00:14:16.222461   51228 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:18.722269   51228 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"True"
	I1108 00:14:18.722291   51228 pod_ready.go:81] duration metric: took 6.806598217s waiting for pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:18.722300   51228 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace to be "Ready" ...
	I1108 00:14:20.739081   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:17.264976   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:19.265242   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:21.265825   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:20.822592   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:23.321115   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:19.745997   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:19.746135   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:19.757885   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:20.245884   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:20.245988   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:20.258408   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:20.745963   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:20.746035   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:20.757892   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:21.246052   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:21.246133   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:21.258401   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:21.745947   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:21.746040   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:21.759160   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:22.246004   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:22.246075   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:22.258859   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:22.745787   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:22.745889   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:22.758099   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:23.245961   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:23.246068   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:23.258810   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:23.745167   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:23.745248   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:23.757093   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:24.245690   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:24.245751   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:24.258264   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:22.739380   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:24.739502   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:23.766235   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:26.264779   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:25.322215   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:27.322896   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:24.745944   50022 api_server.go:166] Checking apiserver status ...
	I1108 00:14:24.746024   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1108 00:14:24.759229   50022 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1108 00:14:25.224130   50022 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1108 00:14:25.224188   50022 kubeadm.go:1128] stopping kube-system containers ...
	I1108 00:14:25.224207   50022 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1108 00:14:25.224267   50022 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 00:14:25.271348   50022 cri.go:89] found id: ""
	I1108 00:14:25.271418   50022 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1108 00:14:25.287540   50022 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:14:25.296398   50022 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:14:25.296452   50022 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:14:25.305111   50022 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1108 00:14:25.305137   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:25.434385   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:26.361847   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:26.561621   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:26.667973   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
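
Editor's note: the restart path does not re-run a full `kubeadm init`; it replays the individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same kubeadm.yaml, with the versioned binaries directory prepended to PATH via `sudo env`. Reduced to a sketch that mirrors the commands in the log (minikube actually drives these through its ssh_runner):

    // phases_sketch.go - replay the kubeadm init phases shown above
    // (illustrative; paths are the ones visible in this log).
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	phases := []string{
    		"certs all",
    		"kubeconfig all",
    		"kubelet-start",
    		"control-plane all",
    		"etcd local",
    	}
    	for _, phase := range phases {
    		cmd := exec.Command("/bin/bash", "-c",
    			fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase))
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", phase, err)
    			os.Exit(1)
    		}
    	}
    }
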
	I1108 00:14:26.798155   50022 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:14:26.798240   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:26.822210   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:27.335493   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:27.836175   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:28.336398   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:28.836400   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:14:28.862790   50022 api_server.go:72] duration metric: took 2.064638513s to wait for apiserver process to appear ...
	I1108 00:14:28.862814   50022 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:14:28.862827   50022 api_server.go:253] Checking apiserver healthz at https://192.168.50.49:8443/healthz ...
	I1108 00:14:26.740013   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:28.740958   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:28.266931   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:30.765036   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:29.827237   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:32.323375   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:33.863452   50022 api_server.go:269] stopped: https://192.168.50.49:8443/healthz: Get "https://192.168.50.49:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1108 00:14:33.863495   50022 api_server.go:253] Checking apiserver healthz at https://192.168.50.49:8443/healthz ...
	I1108 00:14:34.513495   50022 api_server.go:279] https://192.168.50.49:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 00:14:34.513530   50022 api_server.go:103] status: https://192.168.50.49:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 00:14:31.240440   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:33.739764   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:35.014492   50022 api_server.go:253] Checking apiserver healthz at https://192.168.50.49:8443/healthz ...
	I1108 00:14:35.020991   50022 api_server.go:279] https://192.168.50.49:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1108 00:14:35.021019   50022 api_server.go:103] status: https://192.168.50.49:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1108 00:14:35.514559   50022 api_server.go:253] Checking apiserver healthz at https://192.168.50.49:8443/healthz ...
	I1108 00:14:35.521451   50022 api_server.go:279] https://192.168.50.49:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1108 00:14:35.521475   50022 api_server.go:103] status: https://192.168.50.49:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1108 00:14:36.014620   50022 api_server.go:253] Checking apiserver healthz at https://192.168.50.49:8443/healthz ...
	I1108 00:14:36.021243   50022 api_server.go:279] https://192.168.50.49:8443/healthz returned 200:
	ok
	I1108 00:14:36.029191   50022 api_server.go:141] control plane version: v1.16.0
	I1108 00:14:36.029214   50022 api_server.go:131] duration metric: took 7.166394703s to wait for apiserver health ...
	I1108 00:14:36.029225   50022 cni.go:84] Creating CNI manager for ""
	I1108 00:14:36.029232   50022 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:14:36.030800   50022 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:14:32.765436   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:35.264657   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:34.825199   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:37.322438   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:36.032078   50022 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:14:36.042827   50022 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
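
Editor's note: with the apiserver healthy, minikube writes a bridge CNI config to /etc/cni/net.d/1-k8s.conflist (the 457-byte `scp memory` above). The exact contents are not in this log; a representative bridge + host-local conflist of the kind the standard CNI plugins accept looks like the following, where every field value is an assumption rather than minikube's template:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }
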
	I1108 00:14:36.062239   50022 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:14:36.070373   50022 system_pods.go:59] 7 kube-system pods found
	I1108 00:14:36.070404   50022 system_pods.go:61] "coredns-5644d7b6d9-cmx8s" [510a3ae2-abff-40f9-8605-7fd6cc5316de] Running
	I1108 00:14:36.070414   50022 system_pods.go:61] "etcd-old-k8s-version-590541" [4597d43f-d424-4591-8a5c-6e4a7d60bb2b] Running
	I1108 00:14:36.070420   50022 system_pods.go:61] "kube-apiserver-old-k8s-version-590541" [353c1157-7cac-4809-91ea-30745ecbc10c] Running
	I1108 00:14:36.070427   50022 system_pods.go:61] "kube-controller-manager-old-k8s-version-590541" [30679f8f-aa28-4349-ada1-97af45c0c065] Running
	I1108 00:14:36.070432   50022 system_pods.go:61] "kube-proxy-r8p96" [21ac95e4-595f-4520-8174-ef5e1334c1be] Running
	I1108 00:14:36.070437   50022 system_pods.go:61] "kube-scheduler-old-k8s-version-590541" [f406d277-d786-417a-9428-8433143db81c] Running
	I1108 00:14:36.070443   50022 system_pods.go:61] "storage-provisioner" [26f85033-bd24-4332-ba8d-1aed49559417] Running
	I1108 00:14:36.070452   50022 system_pods.go:74] duration metric: took 8.188793ms to wait for pod list to return data ...
	I1108 00:14:36.070461   50022 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:14:36.075209   50022 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:14:36.075242   50022 node_conditions.go:123] node cpu capacity is 2
	I1108 00:14:36.075259   50022 node_conditions.go:105] duration metric: took 4.788324ms to run NodePressure ...
	I1108 00:14:36.075286   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 00:14:36.310748   50022 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1108 00:14:36.319886   50022 retry.go:31] will retry after 259.644928ms: kubelet not initialised
	I1108 00:14:36.584728   50022 retry.go:31] will retry after 259.541836ms: kubelet not initialised
	I1108 00:14:36.851013   50022 retry.go:31] will retry after 319.229418ms: kubelet not initialised
	I1108 00:14:37.192544   50022 retry.go:31] will retry after 949.166954ms: kubelet not initialised
	I1108 00:14:38.149087   50022 retry.go:31] will retry after 1.159461481s: kubelet not initialised
	I1108 00:14:39.313777   50022 retry.go:31] will retry after 1.441288405s: kubelet not initialised
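
Editor's note: the "will retry after ..." lines come from a jittered, roughly exponential backoff: each wait is drawn around a growing base interval, which is why the delays climb from ~260ms toward several seconds rather than doubling exactly. A minimal sketch of the pattern (not minikube's retry package):

    // backoff_sketch.go - retry a check with jittered exponential backoff,
    // in the spirit of the "will retry after ..." lines (illustrative).
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    func retryWithBackoff(check func() error, attempts int, base time.Duration) error {
    	delay := base
    	for i := 0; i < attempts; i++ {
    		if err := check(); err == nil {
    			return nil
    		}
    		// Jitter: sleep between 0.5x and 1.5x of the current delay.
    		jittered := delay/2 + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %s\n", jittered)
    		time.Sleep(jittered)
    		delay *= 2
    	}
    	return errors.New("gave up after all attempts")
    }

    func main() {
    	start := time.Now()
    	_ = retryWithBackoff(func() error {
    		if time.Since(start) < 3*time.Second {
    			return errors.New("kubelet not initialised")
    		}
    		return nil
    	}, 10, 250*time.Millisecond)
    }
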
	I1108 00:14:36.240206   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:38.240974   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:40.739451   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:37.266643   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:39.267727   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:41.765636   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:39.323180   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:41.323278   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:43.821724   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:40.762380   50022 retry.go:31] will retry after 2.811416386s: kubelet not initialised
	I1108 00:14:43.579217   50022 retry.go:31] will retry after 4.427599597s: kubelet not initialised
	I1108 00:14:42.739823   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:45.238841   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:44.266015   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:46.766564   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:45.822389   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:47.822637   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:48.011401   50022 retry.go:31] will retry after 9.583320686s: kubelet not initialised
	I1108 00:14:47.239708   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:49.739520   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:49.264876   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:51.265467   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:50.321858   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:52.823189   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:51.740005   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:54.239137   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:53.267904   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:55.767709   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:55.321381   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:57.821679   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:57.600096   50022 retry.go:31] will retry after 8.628668417s: kubelet not initialised
	I1108 00:14:56.242527   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:58.740775   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:00.742908   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:58.263898   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:00.264487   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:14:59.822276   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:02.322959   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:02.744271   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:05.239364   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:02.764787   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:04.767529   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:04.821706   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:06.822611   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:08.822950   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:06.235557   50022 retry.go:31] will retry after 18.967803661s: kubelet not initialised
	I1108 00:15:07.239957   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:09.243640   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:07.268913   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:09.765546   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:10.823397   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:13.320774   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:11.741381   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:14.239143   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:12.265009   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:14.265329   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:16.265470   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:15.322148   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:17.821371   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:16.740364   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:18.742058   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:18.267349   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:20.763380   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:19.821495   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:21.822583   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:21.239196   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:23.239716   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:25.740472   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:22.764934   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:25.264695   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:24.322074   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:26.324255   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:28.823261   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:25.208456   50022 kubeadm.go:787] kubelet initialised
	I1108 00:15:25.208482   50022 kubeadm.go:788] duration metric: took 48.897709945s waiting for restarted kubelet to initialise ...
	I1108 00:15:25.208492   50022 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:15:25.213730   50022 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-cmx8s" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.220419   50022 pod_ready.go:92] pod "coredns-5644d7b6d9-cmx8s" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:25.220444   50022 pod_ready.go:81] duration metric: took 6.688227ms waiting for pod "coredns-5644d7b6d9-cmx8s" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.220455   50022 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-n42t2" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.225713   50022 pod_ready.go:92] pod "coredns-5644d7b6d9-n42t2" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:25.225734   50022 pod_ready.go:81] duration metric: took 5.271879ms waiting for pod "coredns-5644d7b6d9-n42t2" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.225742   50022 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.231081   50022 pod_ready.go:92] pod "etcd-old-k8s-version-590541" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:25.231102   50022 pod_ready.go:81] duration metric: took 5.353373ms waiting for pod "etcd-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.231113   50022 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.235653   50022 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-590541" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:25.235676   50022 pod_ready.go:81] duration metric: took 4.554135ms waiting for pod "kube-apiserver-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.235687   50022 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.607677   50022 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-590541" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:25.607702   50022 pod_ready.go:81] duration metric: took 372.006515ms waiting for pod "kube-controller-manager-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:25.607715   50022 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r8p96" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:26.007866   50022 pod_ready.go:92] pod "kube-proxy-r8p96" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:26.007901   50022 pod_ready.go:81] duration metric: took 400.175462ms waiting for pod "kube-proxy-r8p96" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:26.007915   50022 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:26.408998   50022 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-590541" in "kube-system" namespace has status "Ready":"True"
	I1108 00:15:26.409023   50022 pod_ready.go:81] duration metric: took 401.100386ms waiting for pod "kube-scheduler-old-k8s-version-590541" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:26.409037   50022 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace to be "Ready" ...
	I1108 00:15:28.714602   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
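
Editor's note: each pod_ready line is one tick of a poll that fetches the pod and inspects its Ready condition; the metrics-server pod never reports Ready here, so the loop runs to its 4m timeout. With client-go, the equivalent wait looks roughly like this (a sketch under assumed kubeconfig and pod names, not minikube's pod_ready.go):

    // podready_sketch.go - poll until a pod's Ready condition is True
    // (illustrative client-go usage, not minikube's implementation).
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func waitForPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, nil // transient fetch error; keep polling
    		}
    		for _, c := range pod.Status.Conditions {
    			if c.Type == corev1.PodReady {
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	if err := waitForPodReady(cs, "kube-system", "metrics-server-74d5856cc6-ghpjp", 4*time.Minute); err != nil {
    		fmt.Println("pod never became Ready:", err)
    	}
    }
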
	I1108 00:15:27.743907   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:30.242025   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:27.764799   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:29.765943   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:31.322316   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:33.821723   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:30.715349   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:33.213961   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:32.739648   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:35.238544   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:32.270073   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:34.764272   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:36.768065   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:36.322383   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:38.821688   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:35.215842   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:37.714618   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:37.239003   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:39.239229   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:39.266142   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:41.765225   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:40.822847   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:42.823419   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:40.214573   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:42.214623   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:41.239832   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:43.740100   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:43.765773   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:45.767613   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:45.323162   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:47.323716   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:44.714312   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:46.714541   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:49.214939   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:46.238097   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:48.240079   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:50.740404   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:48.264657   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:50.266155   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:49.821171   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:51.821247   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:53.821754   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:51.715388   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:54.214072   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:53.239902   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:55.240606   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:52.764709   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:54.765802   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:55.821843   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:57.822037   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:56.214628   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:58.215873   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:57.739805   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:59.742442   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:57.264640   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:59.265598   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:01.269674   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:15:59.823743   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:02.321221   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:00.716761   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:02.717300   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:02.240157   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:04.740325   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:03.765956   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:06.266810   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:04.322200   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:06.325043   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:08.822004   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:05.214678   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:07.214757   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:06.741067   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:09.238455   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:08.764592   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:10.764740   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:11.321882   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:13.323997   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:09.715347   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:12.215814   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:11.238960   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:13.239188   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:15.239933   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:13.268590   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:15.767860   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:15.822286   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:18.323447   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:14.715001   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:17.214864   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:19.220945   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:17.743653   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:20.239877   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:18.267403   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:20.765825   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:20.828982   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:23.322508   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:21.715604   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:24.215532   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:22.240232   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:24.240410   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:22.767921   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:25.266374   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:25.821672   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:28.323033   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:26.715605   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:29.215673   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:26.240493   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:28.739795   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:30.739838   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:27.268851   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:29.765296   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:30.822234   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:32.822653   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:31.714216   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:33.714677   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:33.238984   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:35.239828   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:32.264549   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:34.765297   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:34.823243   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:37.321349   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:35.715073   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:37.715879   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:37.240347   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:39.739526   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:37.265284   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:39.764898   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:39.322588   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:41.822017   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:40.214804   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:42.714783   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:42.238649   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:44.238830   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:42.265404   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:44.266352   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:46.763687   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:44.321389   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:46.322294   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:48.822670   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:45.215415   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:47.715215   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:46.239884   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:48.740698   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:50.740725   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:48.765820   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:51.265744   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:51.321664   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:53.321945   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:49.715720   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:52.215540   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:53.239897   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:55.241013   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:53.764035   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:55.767704   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:55.324156   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:57.821380   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:54.716014   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:57.213472   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:59.216084   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:57.740250   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:59.740808   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:58.264915   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:00.764064   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:16:59.823358   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:01.824897   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:03.827668   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:01.714273   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:03.714538   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:02.238718   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:04.239300   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:02.766695   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:05.268491   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:06.321926   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:08.822906   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:06.215268   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:08.215344   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:06.740893   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:09.240404   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:07.764370   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:09.764952   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:11.765807   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:10.823030   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:13.320640   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:10.715494   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:13.214139   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:11.741308   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:13.741849   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:14.265117   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:16.265550   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:15.322703   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:17.822360   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:15.214808   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:17.214944   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:19.215663   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:16.239627   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:18.241991   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:20.742074   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:18.764043   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:20.764244   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:20.322245   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:22.821679   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:21.715000   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:23.715813   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:23.240800   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:25.741203   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:23.264974   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:25.267122   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:24.823144   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:27.322674   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:26.215099   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:28.215710   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:28.242151   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:30.741098   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:27.765060   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:30.266360   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:29.821467   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:31.822093   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:30.714747   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:32.716931   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:33.241199   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:35.744300   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:32.765221   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:34.766163   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:34.320569   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:36.321680   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:38.321803   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:35.215458   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:37.715660   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:38.241103   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:40.241689   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:37.264893   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:39.264980   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:41.764589   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:40.323069   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:42.822323   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:40.214357   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:42.215838   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:42.738943   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:44.738995   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:44.265516   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:46.764435   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:44.827347   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:47.321911   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:44.715762   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:46.716679   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:49.214899   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:46.739838   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:48.740204   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:48.766668   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:51.266657   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:49.822604   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:51.823333   50505 pod_ready.go:102] pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:51.935354   50505 pod_ready.go:81] duration metric: took 4m0.000854035s waiting for pod "metrics-server-57f55c9bc5-th89c" in "kube-system" namespace to be "Ready" ...
	E1108 00:17:51.935397   50505 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1108 00:17:51.935438   50505 pod_ready.go:38] duration metric: took 4m11.589382956s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:17:51.935470   50505 kubeadm.go:640] restartCluster took 4m31.32204509s
	W1108 00:17:51.935533   50505 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1108 00:17:51.935560   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
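The wall of pod_ready.go:102 lines above is four parallel profiles each polling a metrics-server pod's Ready condition on a roughly 2.5-second interval against the 4m0s budget that 50505 has just exhausted. Below is a minimal sketch of that poll, assuming plain client-go rather than minikube's own wait helpers; the pod and namespace names are copied from the log, everything else is illustrative:

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod's Ready condition every two seconds until it is
// True or the four-minute budget (the "4m0s" in the log) runs out.
func waitPodReady(cs *kubernetes.Clientset, ns, name string) error {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
			fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, ns)
		}
		select {
		case <-ctx.Done():
			// This is the "WaitExtra: waitPodCondition: context deadline exceeded"
			// error logged when the budget runs out.
			return fmt.Errorf("waitPodCondition: %w", ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(cs, "kube-system", "metrics-server-57f55c9bc5-th89c"); err != nil {
		fmt.Println("!", err)
	}
}

Once this wait fails, minikube gives up on restarting the existing cluster and falls through to the kubeadm reset it has just launched above.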
	I1108 00:17:51.715171   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:53.716530   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:51.244682   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:53.741272   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:55.743900   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:53.765757   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:55.766672   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:56.218347   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:58.715621   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:58.246553   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:00.740366   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:58.265496   50613 pod_ready.go:102] pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:17:58.958296   50613 pod_ready.go:81] duration metric: took 4m0.000224971s waiting for pod "metrics-server-57f55c9bc5-s7ldx" in "kube-system" namespace to be "Ready" ...
	E1108 00:17:58.958324   50613 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1108 00:17:58.958349   50613 pod_ready.go:38] duration metric: took 4m11.678298333s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:17:58.958373   50613 kubeadm.go:640] restartCluster took 4m32.361691152s
	W1108 00:17:58.958429   50613 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1108 00:17:58.958455   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1108 00:18:01.214685   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:03.216848   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:03.239882   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:05.739403   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:06.321352   50505 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.385768547s)
	I1108 00:18:06.321435   50505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:06.335385   50505 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:18:06.345310   50505 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:18:06.355261   50505 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:18:06.355301   50505 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1108 00:18:06.570938   50505 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
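The exit-status-2 block above is expected rather than fatal: the kubeadm reset that just completed deleted the four kubeconfigs, so the ls -la probe fails and minikube skips stale-config cleanup and proceeds straight to kubeadm init with a long --ignore-preflight-errors list. A rough equivalent of that probe, assuming plain os/exec in place of minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	// ls exits with status 2 when any path is missing; after `kubeadm reset`
	// all four are gone, so there is no stale config left to clean up.
	if err := exec.Command("sudo", append([]string{"ls", "-la"}, confs...)...).Run(); err != nil {
		fmt.Println("config check failed, skipping stale config cleanup:", err)
		return
	}
	fmt.Println("kubeconfigs present; stale config cleanup would run before kubeadm init")
}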
	I1108 00:18:05.715384   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:07.716056   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:07.739455   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:09.740028   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:09.716612   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:12.215477   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:11.742123   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:14.242024   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:15.847386   50613 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (16.888899647s)
	I1108 00:18:15.847471   50613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:15.865800   50613 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:18:15.877857   50613 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:18:15.888952   50613 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:18:15.889014   50613 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1108 00:18:16.126155   50613 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 00:18:17.730060   50505 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1108 00:18:17.730164   50505 kubeadm.go:322] [preflight] Running pre-flight checks
	I1108 00:18:17.730282   50505 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 00:18:17.730411   50505 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 00:18:17.730564   50505 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 00:18:17.730648   50505 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 00:18:17.732613   50505 out.go:204]   - Generating certificates and keys ...
	I1108 00:18:17.732709   50505 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1108 00:18:17.732788   50505 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1108 00:18:17.732916   50505 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1108 00:18:17.732995   50505 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1108 00:18:17.733104   50505 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1108 00:18:17.733186   50505 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1108 00:18:17.733265   50505 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1108 00:18:17.733344   50505 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1108 00:18:17.733429   50505 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1108 00:18:17.733526   50505 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1108 00:18:17.733572   50505 kubeadm.go:322] [certs] Using the existing "sa" key
	I1108 00:18:17.733640   50505 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 00:18:17.733699   50505 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 00:18:17.733763   50505 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 00:18:17.733838   50505 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 00:18:17.733905   50505 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 00:18:17.734002   50505 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 00:18:17.734088   50505 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 00:18:17.735708   50505 out.go:204]   - Booting up control plane ...
	I1108 00:18:17.735808   50505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 00:18:17.735898   50505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 00:18:17.735981   50505 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 00:18:17.736113   50505 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 00:18:17.736209   50505 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 00:18:17.736255   50505 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1108 00:18:17.736431   50505 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1108 00:18:17.736517   50505 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503639 seconds
	I1108 00:18:17.736637   50505 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 00:18:17.736779   50505 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 00:18:17.736873   50505 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 00:18:17.737093   50505 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-320390 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 00:18:17.737168   50505 kubeadm.go:322] [bootstrap-token] Using token: 8lntxi.1hule2axpc9kkhcs
	I1108 00:18:17.738763   50505 out.go:204]   - Configuring RBAC rules ...
	I1108 00:18:17.738904   50505 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 00:18:17.739014   50505 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 00:18:17.739197   50505 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 00:18:17.739364   50505 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I1108 00:18:17.739534   50505 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 00:18:17.739651   50505 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 00:18:17.739781   50505 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 00:18:17.739829   50505 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1108 00:18:17.739881   50505 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1108 00:18:17.739889   50505 kubeadm.go:322] 
	I1108 00:18:17.739956   50505 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1108 00:18:17.739964   50505 kubeadm.go:322] 
	I1108 00:18:17.740051   50505 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1108 00:18:17.740065   50505 kubeadm.go:322] 
	I1108 00:18:17.740094   50505 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1108 00:18:17.740165   50505 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 00:18:17.740229   50505 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 00:18:17.740239   50505 kubeadm.go:322] 
	I1108 00:18:17.740311   50505 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1108 00:18:17.740320   50505 kubeadm.go:322] 
	I1108 00:18:17.740375   50505 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 00:18:17.740385   50505 kubeadm.go:322] 
	I1108 00:18:17.740443   50505 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1108 00:18:17.740528   50505 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 00:18:17.740629   50505 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 00:18:17.740640   50505 kubeadm.go:322] 
	I1108 00:18:17.740733   50505 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 00:18:17.740840   50505 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1108 00:18:17.740860   50505 kubeadm.go:322] 
	I1108 00:18:17.740959   50505 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8lntxi.1hule2axpc9kkhcs \
	I1108 00:18:17.741077   50505 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 \
	I1108 00:18:17.741106   50505 kubeadm.go:322] 	--control-plane 
	I1108 00:18:17.741114   50505 kubeadm.go:322] 
	I1108 00:18:17.741207   50505 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1108 00:18:17.741221   50505 kubeadm.go:322] 
	I1108 00:18:17.741312   50505 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8lntxi.1hule2axpc9kkhcs \
	I1108 00:18:17.741435   50505 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 
	I1108 00:18:17.741451   50505 cni.go:84] Creating CNI manager for ""
	I1108 00:18:17.741460   50505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:18:17.742996   50505 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:18:17.744307   50505 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:18:17.800065   50505 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
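The scp above writes minikube's bridge CNI config to /etc/cni/net.d/1-k8s.conflist; the log records only its size (457 bytes), not its contents. The sketch below writes an illustrative conflist of the usual bridge-plus-portmap shape — the cniVersion, subnet, and plugin chain are assumptions, not the recorded bytes:

package main

import (
	"fmt"
	"os"
)

// An illustrative bridge CNI conflist; the exact file minikube copies
// may differ in version, subnet, and plugin options.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// In the real flow this lands in /etc/cni/net.d/ over SSH.
	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}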
	I1108 00:18:17.844561   50505 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 00:18:17.844628   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:17.844636   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=no-preload-320390 minikube.k8s.io/updated_at=2023_11_08T00_18_17_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:18.268124   50505 ops.go:34] apiserver oom_adj: -16
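The cat /proc/$(pgrep kube-apiserver)/oom_adj run above is how minikube records that the API server is shielded from the kernel OOM killer (ops.go reports -16). A small stand-alone version, assuming pgrep and procfs exactly as in the logged command:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err) // pgrep exits non-zero when no process matches
	}
	pid := strings.Fields(string(out))[0] // first matching PID
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	// A negative value (here -16) makes the OOM killer avoid the API server.
	fmt.Printf("apiserver oom_adj: %s", adj)
}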
	I1108 00:18:18.268268   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:18.391271   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:18.999821   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:14.715492   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:16.716036   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:19.217395   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:16.739748   51228 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:18.722551   51228 pod_ready.go:81] duration metric: took 4m0.000232672s waiting for pod "metrics-server-57f55c9bc5-nlhpn" in "kube-system" namespace to be "Ready" ...
	E1108 00:18:18.722600   51228 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1108 00:18:18.722616   51228 pod_ready.go:38] duration metric: took 4m7.657742468s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:18:18.722637   51228 kubeadm.go:640] restartCluster took 4m28.262375275s
	W1108 00:18:18.722722   51228 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1108 00:18:18.722756   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1108 00:18:19.500069   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:20.000575   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:20.500545   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:20.999918   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:21.499960   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:22.000673   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:22.499811   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:23.000501   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:23.499942   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:24.000407   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:21.217427   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:23.715751   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:27.224428   50613 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1108 00:18:27.224497   50613 kubeadm.go:322] [preflight] Running pre-flight checks
	I1108 00:18:27.224589   50613 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 00:18:27.224720   50613 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 00:18:27.224916   50613 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 00:18:27.225019   50613 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 00:18:27.226893   50613 out.go:204]   - Generating certificates and keys ...
	I1108 00:18:27.227001   50613 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1108 00:18:27.227091   50613 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1108 00:18:27.227201   50613 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1108 00:18:27.227279   50613 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1108 00:18:27.227365   50613 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1108 00:18:27.227433   50613 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1108 00:18:27.227517   50613 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1108 00:18:27.227602   50613 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1108 00:18:27.227719   50613 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1108 00:18:27.227808   50613 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1108 00:18:27.227864   50613 kubeadm.go:322] [certs] Using the existing "sa" key
	I1108 00:18:27.227938   50613 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 00:18:27.228013   50613 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 00:18:27.228102   50613 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 00:18:27.228186   50613 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 00:18:27.228264   50613 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 00:18:27.228387   50613 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 00:18:27.228479   50613 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 00:18:27.229827   50613 out.go:204]   - Booting up control plane ...
	I1108 00:18:27.229950   50613 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 00:18:27.230032   50613 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 00:18:27.230124   50613 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 00:18:27.230265   50613 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 00:18:27.230387   50613 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 00:18:27.230447   50613 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1108 00:18:27.230699   50613 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1108 00:18:27.230810   50613 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503846 seconds
	I1108 00:18:27.230970   50613 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 00:18:27.231145   50613 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 00:18:27.231237   50613 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 00:18:27.231478   50613 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-253253 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 00:18:27.231573   50613 kubeadm.go:322] [bootstrap-token] Using token: vyjibp.12wjj754q6czu5uo
	I1108 00:18:27.233159   50613 out.go:204]   - Configuring RBAC rules ...
	I1108 00:18:27.233266   50613 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 00:18:27.233340   50613 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 00:18:27.233454   50613 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 00:18:27.233558   50613 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I1108 00:18:27.233693   50613 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 00:18:27.233793   50613 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 00:18:27.233943   50613 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 00:18:27.234012   50613 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1108 00:18:27.234074   50613 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1108 00:18:27.234086   50613 kubeadm.go:322] 
	I1108 00:18:27.234174   50613 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1108 00:18:27.234191   50613 kubeadm.go:322] 
	I1108 00:18:27.234300   50613 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1108 00:18:27.234310   50613 kubeadm.go:322] 
	I1108 00:18:27.234337   50613 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1108 00:18:27.234388   50613 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 00:18:27.234432   50613 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 00:18:27.234436   50613 kubeadm.go:322] 
	I1108 00:18:27.234490   50613 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1108 00:18:27.234507   50613 kubeadm.go:322] 
	I1108 00:18:27.234567   50613 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 00:18:27.234577   50613 kubeadm.go:322] 
	I1108 00:18:27.234651   50613 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1108 00:18:27.234756   50613 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 00:18:27.234858   50613 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 00:18:27.234873   50613 kubeadm.go:322] 
	I1108 00:18:27.234959   50613 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 00:18:27.235056   50613 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1108 00:18:27.235066   50613 kubeadm.go:322] 
	I1108 00:18:27.235184   50613 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token vyjibp.12wjj754q6czu5uo \
	I1108 00:18:27.235334   50613 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 \
	I1108 00:18:27.235369   50613 kubeadm.go:322] 	--control-plane 
	I1108 00:18:27.235378   50613 kubeadm.go:322] 
	I1108 00:18:27.235476   50613 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1108 00:18:27.235487   50613 kubeadm.go:322] 
	I1108 00:18:27.235585   50613 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vyjibp.12wjj754q6czu5uo \
	I1108 00:18:27.235734   50613 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 
	I1108 00:18:27.235751   50613 cni.go:84] Creating CNI manager for ""
	I1108 00:18:27.235759   50613 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:18:27.237411   50613 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:18:24.499703   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:24.999659   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:25.499724   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:26.000534   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:26.500532   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:26.999903   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:27.500582   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:28.000156   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:28.500443   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:29.000019   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:26.213623   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:28.214432   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:29.500525   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:29.999698   50505 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:30.173272   50505 kubeadm.go:1081] duration metric: took 12.328709999s to wait for elevateKubeSystemPrivileges.
	I1108 00:18:30.173304   50505 kubeadm.go:406] StartCluster complete in 5m9.613679996s
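The run of `kubectl get sa default` every ~500ms above is elevateKubeSystemPrivileges waiting for kubeadm to create the default service account before granting cluster-admin to kube-system (the minikube-rbac binding created at 00:18:17). A minimal sketch of that retry loop, with the binary and kubeconfig paths copied from the log and the retry budget assumed:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.28.3/kubectl"
	kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"

	start := time.Now()
	for {
		// kubeadm creates the default service account asynchronously;
		// a zero exit status means the API server is serving it.
		if exec.Command(kubectl, "get", "sa", "default", kubeconfig).Run() == nil {
			break
		}
		if time.Since(start) > 2*time.Minute { // budget assumed, not from the log
			panic("timed out waiting for default service account")
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Printf("took %s to wait for elevateKubeSystemPrivileges\n", time.Since(start))

	// Equivalent of the minikube-rbac binding created at 00:18:17 above.
	_ = exec.Command(kubectl, "create", "clusterrolebinding", "minikube-rbac",
		"--clusterrole=cluster-admin", "--serviceaccount=kube-system:default",
		kubeconfig).Run()
}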
	I1108 00:18:30.173323   50505 settings.go:142] acquiring lock: {Name:mk24113e0811d0822c92609e9886aa6fa175d90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:18:30.173399   50505 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:18:30.175022   50505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:18:30.175277   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 00:18:30.175394   50505 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
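The toEnable map above drives the addon fan-out that follows: only keys set to true get a "Setting addon ... in profile" sequence, which is why storage-provisioner, metrics-server, and default-storageclass each appear below while the rest are skipped. A toy version of that filter (the loop body is assumed; map iteration order in Go is deliberately unordered, which matches the interleaved log):

package main

import "fmt"

func main() {
	profile := "no-preload-320390"
	toEnable := map[string]bool{
		"default-storageclass": true,
		"metrics-server":       true,
		"storage-provisioner":  true,
		"ingress":              false, // disabled addons get no Setting step
	}
	for name, enable := range toEnable {
		if !enable {
			continue
		}
		fmt.Printf("Setting addon %s=true in %q\n", name, profile)
	}
}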
	I1108 00:18:30.175512   50505 addons.go:69] Setting storage-provisioner=true in profile "no-preload-320390"
	I1108 00:18:30.175534   50505 addons.go:231] Setting addon storage-provisioner=true in "no-preload-320390"
	W1108 00:18:30.175546   50505 addons.go:240] addon storage-provisioner should already be in state true
	I1108 00:18:30.175591   50505 host.go:66] Checking if "no-preload-320390" exists ...
	I1108 00:18:30.175595   50505 config.go:182] Loaded profile config "no-preload-320390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:18:30.175648   50505 addons.go:69] Setting default-storageclass=true in profile "no-preload-320390"
	I1108 00:18:30.175669   50505 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-320390"
	I1108 00:18:30.175856   50505 addons.go:69] Setting metrics-server=true in profile "no-preload-320390"
	I1108 00:18:30.175880   50505 addons.go:231] Setting addon metrics-server=true in "no-preload-320390"
	W1108 00:18:30.175890   50505 addons.go:240] addon metrics-server should already be in state true
	I1108 00:18:30.175932   50505 host.go:66] Checking if "no-preload-320390" exists ...
	I1108 00:18:30.176004   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.176047   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.176074   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.176110   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.176255   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.176297   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.193487   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34549
	I1108 00:18:30.194065   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.194643   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38457
	I1108 00:18:30.194791   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.194809   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.195197   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.195244   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.195454   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35159
	I1108 00:18:30.195741   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.195758   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.195840   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.195975   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.196019   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.196254   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.196377   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.196401   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.196444   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetState
	I1108 00:18:30.196747   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.197318   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.197365   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.200432   50505 addons.go:231] Setting addon default-storageclass=true in "no-preload-320390"
	W1108 00:18:30.200454   50505 addons.go:240] addon default-storageclass should already be in state true
	I1108 00:18:30.200482   50505 host.go:66] Checking if "no-preload-320390" exists ...
	I1108 00:18:30.200858   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.200904   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.214840   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45815
	I1108 00:18:30.215335   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.215693   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.215710   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.216018   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.216163   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetState
	I1108 00:18:30.216761   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32969
	I1108 00:18:30.217467   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.218005   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:18:30.218255   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.218276   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.218567   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.218686   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetState
	I1108 00:18:30.218895   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33449
	I1108 00:18:30.219282   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.221453   50505 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:18:30.219887   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.220152   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:18:30.227122   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.227187   50505 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:18:30.227203   50505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 00:18:30.227220   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:18:30.229126   50505 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1108 00:18:30.227716   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.230458   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:18:30.231018   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:18:30.231625   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:18:30.231640   50505 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 00:18:30.231664   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:18:30.231663   50505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 00:18:30.231687   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:18:30.231871   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:18:30.232040   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:18:30.232130   50505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:30.232164   50505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:30.232167   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:18:30.234984   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:18:30.235307   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:18:30.235327   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:18:30.235589   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:18:30.235819   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:18:30.236102   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:18:30.236409   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:18:30.248939   50505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33483
	I1108 00:18:30.249596   50505 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:30.250088   50505 main.go:141] libmachine: Using API Version  1
	I1108 00:18:30.250105   50505 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:30.250535   50505 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:30.250715   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetState
	I1108 00:18:30.252631   50505 main.go:141] libmachine: (no-preload-320390) Calling .DriverName
	I1108 00:18:30.252909   50505 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 00:18:30.252923   50505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 00:18:30.252941   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHHostname
	I1108 00:18:30.255926   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:18:30.256320   50505 main.go:141] libmachine: (no-preload-320390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:d8:91", ip: ""} in network mk-no-preload-320390: {Iface:virbr3 ExpiryTime:2023-11-08 01:12:52 +0000 UTC Type:0 Mac:52:54:00:0f:d8:91 Iaid: IPaddr:192.168.61.176 Prefix:24 Hostname:no-preload-320390 Clientid:01:52:54:00:0f:d8:91}
	I1108 00:18:30.256354   50505 main.go:141] libmachine: (no-preload-320390) DBG | domain no-preload-320390 has defined IP address 192.168.61.176 and MAC address 52:54:00:0f:d8:91 in network mk-no-preload-320390
	I1108 00:18:30.256440   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHPort
	I1108 00:18:30.256639   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHKeyPath
	I1108 00:18:30.256795   50505 main.go:141] libmachine: (no-preload-320390) Calling .GetSSHUsername
	I1108 00:18:30.257009   50505 sshutil.go:53] new ssh client: &{IP:192.168.61.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa Username:docker}
	I1108 00:18:30.299537   50505 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-320390" context rescaled to 1 replicas
	I1108 00:18:30.299586   50505 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.176 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 00:18:30.301520   50505 out.go:177] * Verifying Kubernetes components...
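The kapi.go rescale above trims CoreDNS from kubeadm's default of two replicas down to one, which is all a single-node cluster needs. A shell equivalent of that rescale, a sketch assuming the bundled kubectl and the kubeconfig path seen elsewhere in this log:

    sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system scale deployment coredns --replicas=1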
	I1108 00:18:27.238758   50613 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:18:27.263679   50613 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1108 00:18:27.350198   50613 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 00:18:27.350271   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:27.350293   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=embed-certs-253253 minikube.k8s.io/updated_at=2023_11_08T00_18_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:27.409145   50613 ops.go:34] apiserver oom_adj: -16
	I1108 00:18:27.761874   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:27.882030   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:28.495425   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:28.995764   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:29.495154   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:29.994859   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:30.495492   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:30.995328   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:31.495353   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:30.303227   50505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:30.426941   50505 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 00:18:30.426964   50505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1108 00:18:30.450862   50505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:18:30.456250   50505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 00:18:30.482239   50505 node_ready.go:35] waiting up to 6m0s for node "no-preload-320390" to be "Ready" ...
	I1108 00:18:30.482286   50505 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 00:18:30.493041   50505 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 00:18:30.493073   50505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 00:18:30.542548   50505 node_ready.go:49] node "no-preload-320390" has status "Ready":"True"
	I1108 00:18:30.542579   50505 node_ready.go:38] duration metric: took 60.300148ms waiting for node "no-preload-320390" to be "Ready" ...
	I1108 00:18:30.542593   50505 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
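node_ready and pod_ready above poll the node's Ready condition and the readiness of the system-critical pods through the API. A shell sketch of the node check, with the node name and paths taken from the log (prints True once the node reports Ready):

    sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        get node no-preload-320390 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'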
	I1108 00:18:30.554527   50505 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:18:30.554560   50505 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 00:18:30.648882   50505 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:30.658134   50505 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
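Each `scp memory -->` entry above streams a manifest from minikube's embedded assets to the guest over SSH rather than copying a file from disk; the staged files are then applied in a single kubectl invocation, as in the Run line just above. Roughly, from a shell, for one manifest (host, user, and key are the ones sshutil reports; the local storage-provisioner.yaml file stands in for the embedded asset):

    KEY=/home/jenkins/minikube-integration/17585-9647/.minikube/machines/no-preload-320390/id_rsa
    ssh -i "$KEY" docker@192.168.61.176 \
        'sudo tee /etc/kubernetes/addons/storage-provisioner.yaml >/dev/null' < storage-provisioner.yaml
    ssh -i "$KEY" docker@192.168.61.176 sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml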
	I1108 00:18:32.959227   50505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.50832393s)
	I1108 00:18:32.959242   50505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.502960333s)
	I1108 00:18:32.959281   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:32.959287   50505 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.476976723s)
	I1108 00:18:32.959301   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:32.959347   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:32.959307   50505 start.go:926] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
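Reconstructed from the sed expressions in the Run line at 00:18:30.482286 above, the stanza injected into the CoreDNS Corefile is:

        hosts {
           192.168.61.1 host.minikube.internal
           fallthrough
        }

(with a `log` directive also inserted ahead of `errors`); this is what lets pods resolve host.minikube.internal to the host side of the VM network.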
	I1108 00:18:32.959293   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:32.959711   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:32.959729   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:32.959748   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:32.959761   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:32.959771   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:32.959780   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:32.959795   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:32.959807   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:32.960123   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:32.960137   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:32.960207   50505 main.go:141] libmachine: (no-preload-320390) DBG | Closing plugin on server side
	I1108 00:18:32.960229   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:32.960237   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:33.007609   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:33.007641   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:33.007926   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:33.007945   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:33.106167   50505 pod_ready.go:102] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:33.284838   50505 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.626637787s)
	I1108 00:18:33.284900   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:33.284916   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:33.285239   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:33.285259   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:33.285269   50505 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:33.285278   50505 main.go:141] libmachine: (no-preload-320390) Calling .Close
	I1108 00:18:33.285579   50505 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:33.285612   50505 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:33.285626   50505 addons.go:467] Verifying addon metrics-server=true in "no-preload-320390"
	I1108 00:18:33.285579   50505 main.go:141] libmachine: (no-preload-320390) DBG | Closing plugin on server side
	I1108 00:18:33.288563   50505 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1108 00:18:33.290062   50505 addons.go:502] enable addons completed in 3.114669599s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1108 00:18:30.231324   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:32.715318   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:33.473926   51228 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.751140561s)
	I1108 00:18:33.473999   51228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:33.489630   51228 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:18:33.501413   51228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:18:33.513531   51228 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:18:33.513588   51228 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1108 00:18:33.767243   51228 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
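ls exits with status 2 when an operand does not exist, so the failed check above simply means this node has no prior control-plane config and the stale-config cleanup can be skipped before kubeadm init. The Service-Kubelet preflight warning is kubeadm's stock message; the remedy it names is simply:

    sudo systemctl enable kubelet.service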
	I1108 00:18:31.995169   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:32.494991   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:32.995423   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:33.494761   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:33.995099   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:34.494829   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:34.995699   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:35.495034   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:35.995563   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:36.494752   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:35.563227   50505 pod_ready.go:102] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:37.563703   50505 pod_ready.go:102] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:34.715399   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:36.717212   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:39.215769   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:36.995285   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:37.495447   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:37.995529   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:38.494898   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:38.995450   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:39.494831   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:39.994880   50613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:40.097031   50613 kubeadm.go:1081] duration metric: took 12.746819294s to wait for elevateKubeSystemPrivileges.
	I1108 00:18:40.097074   50613 kubeadm.go:406] StartCluster complete in 5m13.552864243s
	I1108 00:18:40.097102   50613 settings.go:142] acquiring lock: {Name:mk24113e0811d0822c92609e9886aa6fa175d90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:18:40.097182   50613 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:18:40.099232   50613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:18:40.099513   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 00:18:40.099522   50613 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1108 00:18:40.099603   50613 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-253253"
	I1108 00:18:40.099612   50613 addons.go:69] Setting default-storageclass=true in profile "embed-certs-253253"
	I1108 00:18:40.099625   50613 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-253253"
	I1108 00:18:40.099626   50613 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-253253"
	W1108 00:18:40.099635   50613 addons.go:240] addon storage-provisioner should already be in state true
	I1108 00:18:40.099675   50613 host.go:66] Checking if "embed-certs-253253" exists ...
	I1108 00:18:40.099724   50613 config.go:182] Loaded profile config "embed-certs-253253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:18:40.099769   50613 addons.go:69] Setting metrics-server=true in profile "embed-certs-253253"
	I1108 00:18:40.099783   50613 addons.go:231] Setting addon metrics-server=true in "embed-certs-253253"
	W1108 00:18:40.099791   50613 addons.go:240] addon metrics-server should already be in state true
	I1108 00:18:40.099827   50613 host.go:66] Checking if "embed-certs-253253" exists ...
	I1108 00:18:40.100063   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.100064   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.100085   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.100086   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.100199   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.100229   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.117281   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35397
	I1108 00:18:40.117806   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.118339   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.118364   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.118717   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.118761   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38821
	I1108 00:18:40.119093   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.119311   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.119334   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.119497   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.119520   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.119668   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
	I1108 00:18:40.119841   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.119970   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.120403   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.120436   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.120443   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.120456   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.120895   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.121048   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetState
	I1108 00:18:40.123728   50613 addons.go:231] Setting addon default-storageclass=true in "embed-certs-253253"
	W1108 00:18:40.123746   50613 addons.go:240] addon default-storageclass should already be in state true
	I1108 00:18:40.123774   50613 host.go:66] Checking if "embed-certs-253253" exists ...
	I1108 00:18:40.124049   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.124073   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.139787   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39437
	I1108 00:18:40.140217   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.140776   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.140799   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.141358   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.143152   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34997
	I1108 00:18:40.143448   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetState
	I1108 00:18:40.144341   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.145156   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.145175   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.145536   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.145695   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:18:40.146126   50613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:40.146151   50613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:40.147863   50613 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:18:40.149252   50613 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:18:40.149270   50613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 00:18:40.149288   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:18:40.149701   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41685
	I1108 00:18:40.150096   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.150599   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.150613   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.151053   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.151223   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetState
	I1108 00:18:40.152047   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:18:40.152462   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:18:40.152476   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:18:40.152718   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:18:40.152834   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:18:40.152927   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:18:40.153008   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:18:40.153394   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:18:40.155041   50613 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1108 00:18:40.156603   50613 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 00:18:40.156625   50613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 00:18:40.156642   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:18:40.159550   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:18:40.159952   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:18:40.159973   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:18:40.160151   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:18:40.160294   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:18:40.160403   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:18:40.160505   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:18:40.162863   50613 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-253253" context rescaled to 1 replicas
	I1108 00:18:40.162890   50613 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 00:18:40.164733   50613 out.go:177] * Verifying Kubernetes components...
	I1108 00:18:40.166082   50613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:40.167562   50613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36079
	I1108 00:18:40.167938   50613 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:40.168414   50613 main.go:141] libmachine: Using API Version  1
	I1108 00:18:40.168433   50613 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:40.168805   50613 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:40.169056   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetState
	I1108 00:18:40.170751   50613 main.go:141] libmachine: (embed-certs-253253) Calling .DriverName
	I1108 00:18:40.171377   50613 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 00:18:40.171389   50613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 00:18:40.171402   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHHostname
	I1108 00:18:40.174508   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:18:40.174826   50613 main.go:141] libmachine: (embed-certs-253253) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:6e:cb", ip: ""} in network mk-embed-certs-253253: {Iface:virbr1 ExpiryTime:2023-11-08 01:13:12 +0000 UTC Type:0 Mac:52:54:00:1a:6e:cb Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:embed-certs-253253 Clientid:01:52:54:00:1a:6e:cb}
	I1108 00:18:40.174859   50613 main.go:141] libmachine: (embed-certs-253253) DBG | domain embed-certs-253253 has defined IP address 192.168.39.159 and MAC address 52:54:00:1a:6e:cb in network mk-embed-certs-253253
	I1108 00:18:40.175035   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHPort
	I1108 00:18:40.175182   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHKeyPath
	I1108 00:18:40.175341   50613 main.go:141] libmachine: (embed-certs-253253) Calling .GetSSHUsername
	I1108 00:18:40.175467   50613 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/embed-certs-253253/id_rsa Username:docker}
	I1108 00:18:40.387003   50613 node_ready.go:35] waiting up to 6m0s for node "embed-certs-253253" to be "Ready" ...
	I1108 00:18:40.387126   50613 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 00:18:40.398413   50613 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 00:18:40.398489   50613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1108 00:18:40.400162   50613 node_ready.go:49] node "embed-certs-253253" has status "Ready":"True"
	I1108 00:18:40.400189   50613 node_ready.go:38] duration metric: took 13.150355ms waiting for node "embed-certs-253253" to be "Ready" ...
	I1108 00:18:40.400204   50613 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:18:40.416263   50613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 00:18:40.420346   50613 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-thtp4" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:40.441486   50613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:18:40.468701   50613 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 00:18:40.468731   50613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 00:18:40.546438   50613 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:18:40.546475   50613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 00:18:40.620999   50613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:18:41.963134   50613 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.575984932s)
	I1108 00:18:41.963222   50613 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1108 00:18:41.963099   50613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.546802194s)
	I1108 00:18:41.963311   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:41.963342   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:41.963771   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:41.963821   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:41.963843   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:41.963862   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:41.964176   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:41.964202   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:41.964188   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Closing plugin on server side
	I1108 00:18:41.997903   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:41.997987   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:41.998341   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Closing plugin on server side
	I1108 00:18:41.998428   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:41.998487   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:42.447761   50613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.006222409s)
	I1108 00:18:42.447810   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:42.447824   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:42.448092   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:42.448109   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:42.448110   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Closing plugin on server side
	I1108 00:18:42.448127   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:42.448143   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:42.449994   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Closing plugin on server side
	I1108 00:18:42.450013   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:42.450027   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:42.484250   50613 pod_ready.go:102] pod "coredns-5dd5756b68-thtp4" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:42.788997   50613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.167954058s)
	I1108 00:18:42.789042   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:42.789057   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:42.789342   50613 main.go:141] libmachine: (embed-certs-253253) DBG | Closing plugin on server side
	I1108 00:18:42.789395   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:42.789416   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:42.789427   50613 main.go:141] libmachine: Making call to close driver server
	I1108 00:18:42.789437   50613 main.go:141] libmachine: (embed-certs-253253) Calling .Close
	I1108 00:18:42.789673   50613 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:18:42.789698   50613 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:18:42.789709   50613 addons.go:467] Verifying addon metrics-server=true in "embed-certs-253253"
	I1108 00:18:42.792162   50613 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1108 00:18:39.563860   50505 pod_ready.go:102] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:41.565166   50505 pod_ready.go:102] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:44.063902   50505 pod_ready.go:102] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:41.216274   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:43.717636   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:45.631283   51228 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1108 00:18:45.631354   51228 kubeadm.go:322] [preflight] Running pre-flight checks
	I1108 00:18:45.631464   51228 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 00:18:45.631583   51228 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 00:18:45.631736   51228 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1108 00:18:45.631848   51228 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 00:18:45.633488   51228 out.go:204]   - Generating certificates and keys ...
	I1108 00:18:45.633579   51228 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1108 00:18:45.633656   51228 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1108 00:18:45.633756   51228 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1108 00:18:45.633840   51228 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1108 00:18:45.633947   51228 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1108 00:18:45.634041   51228 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1108 00:18:45.634140   51228 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1108 00:18:45.634244   51228 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1108 00:18:45.634357   51228 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1108 00:18:45.634458   51228 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1108 00:18:45.634541   51228 kubeadm.go:322] [certs] Using the existing "sa" key
	I1108 00:18:45.634625   51228 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 00:18:45.634713   51228 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 00:18:45.634781   51228 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 00:18:45.634865   51228 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 00:18:45.634935   51228 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 00:18:45.635044   51228 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 00:18:45.635133   51228 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 00:18:45.636666   51228 out.go:204]   - Booting up control plane ...
	I1108 00:18:45.636755   51228 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 00:18:45.636862   51228 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 00:18:45.636939   51228 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 00:18:45.637065   51228 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 00:18:45.637164   51228 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 00:18:45.637221   51228 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1108 00:18:45.637410   51228 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1108 00:18:45.637479   51228 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.005347 seconds
	I1108 00:18:45.637583   51228 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 00:18:45.637710   51228 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 00:18:45.637782   51228 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 00:18:45.637961   51228 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-039263 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 00:18:45.638007   51228 kubeadm.go:322] [bootstrap-token] Using token: ub1ww5.kh6zrwfrcg8jc9rc
	I1108 00:18:45.639491   51228 out.go:204]   - Configuring RBAC rules ...
	I1108 00:18:45.639627   51228 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 00:18:45.639743   51228 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 00:18:45.639918   51228 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 00:18:45.640060   51228 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 00:18:45.640240   51228 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 00:18:45.640344   51228 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 00:18:45.640487   51228 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 00:18:45.640546   51228 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1108 00:18:45.640625   51228 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1108 00:18:45.640643   51228 kubeadm.go:322] 
	I1108 00:18:45.640726   51228 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1108 00:18:45.640737   51228 kubeadm.go:322] 
	I1108 00:18:45.640850   51228 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1108 00:18:45.640860   51228 kubeadm.go:322] 
	I1108 00:18:45.640891   51228 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1108 00:18:45.640968   51228 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 00:18:45.641042   51228 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 00:18:45.641048   51228 kubeadm.go:322] 
	I1108 00:18:45.641124   51228 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1108 00:18:45.641137   51228 kubeadm.go:322] 
	I1108 00:18:45.641193   51228 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 00:18:45.641204   51228 kubeadm.go:322] 
	I1108 00:18:45.641266   51228 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1108 00:18:45.641372   51228 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 00:18:45.641485   51228 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 00:18:45.641493   51228 kubeadm.go:322] 
	I1108 00:18:45.641589   51228 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 00:18:45.641704   51228 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1108 00:18:45.641714   51228 kubeadm.go:322] 
	I1108 00:18:45.641815   51228 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token ub1ww5.kh6zrwfrcg8jc9rc \
	I1108 00:18:45.641939   51228 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 \
	I1108 00:18:45.641971   51228 kubeadm.go:322] 	--control-plane 
	I1108 00:18:45.641979   51228 kubeadm.go:322] 
	I1108 00:18:45.642084   51228 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1108 00:18:45.642093   51228 kubeadm.go:322] 
	I1108 00:18:45.642216   51228 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token ub1ww5.kh6zrwfrcg8jc9rc \
	I1108 00:18:45.642356   51228 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 
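
The join commands above embed a bootstrap token that expires after 24 hours by default. If it has lapsed, a minimal recovery sketch looks like the following, run as root on the control-plane VM (e.g. via `minikube ssh`); the openssl pipeline is the documented way to recompute the --discovery-token-ca-cert-hash, not something shown in this log:

    # Mint a fresh token and print a ready-to-use join command:
    kubeadm token create --print-join-command

    # Recompute the CA cert hash by hand if needed:
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | sha256sum | cut -d' ' -f1
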
	I1108 00:18:45.642372   51228 cni.go:84] Creating CNI manager for ""
	I1108 00:18:45.642379   51228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:18:45.644712   51228 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:18:45.646211   51228 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:18:45.672621   51228 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
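
The 457-byte file scp'd to /etc/cni/net.d/1-k8s.conflist carries the bridge CNI configuration announced above. Its exact contents are not in this log; a representative bridge + portmap conflist, with illustrative values only, would be written like this:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
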
	I1108 00:18:45.700061   51228 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 00:18:45.700142   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:45.700153   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=default-k8s-diff-port-039263 minikube.k8s.io/updated_at=2023_11_08T00_18_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:45.805900   51228 ops.go:34] apiserver oom_adj: -16
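
Two steps happen here: the oom_adj read confirms kube-apiserver runs with an OOM score adjustment of -16 (the kernel's OOM killer targets it last), and the minikube-rbac clusterrolebinding grants kube-system's default ServiceAccount cluster-admin so addons can manage cluster state. A manual spot-check over SSH, reusing the exact paths from the log, might look like:

    # -16 means the apiserver is strongly shielded from the OOM killer
    cat /proc/$(pgrep kube-apiserver)/oom_adj

    # Verify the binding that was just created
    sudo /var/lib/minikube/binaries/v1.28.3/kubectl get clusterrolebinding minikube-rbac \
      --kubeconfig=/var/lib/minikube/kubeconfig -o wide
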
	I1108 00:18:42.794167   50613 addons.go:502] enable addons completed in 2.694639707s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1108 00:18:44.953906   50613 pod_ready.go:92] pod "coredns-5dd5756b68-thtp4" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.953928   50613 pod_ready.go:81] duration metric: took 4.533558234s waiting for pod "coredns-5dd5756b68-thtp4" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.953936   50613 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.958854   50613 pod_ready.go:92] pod "etcd-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.958880   50613 pod_ready.go:81] duration metric: took 4.937561ms waiting for pod "etcd-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.958892   50613 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.964282   50613 pod_ready.go:92] pod "kube-apiserver-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.964305   50613 pod_ready.go:81] duration metric: took 5.40486ms waiting for pod "kube-apiserver-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.964317   50613 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.969544   50613 pod_ready.go:92] pod "kube-controller-manager-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.969561   50613 pod_ready.go:81] duration metric: took 5.237377ms waiting for pod "kube-controller-manager-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.969568   50613 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-shp9z" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.974340   50613 pod_ready.go:92] pod "kube-proxy-shp9z" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.974357   50613 pod_ready.go:81] duration metric: took 4.78369ms waiting for pod "kube-proxy-shp9z" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.974367   50613 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:45.350442   50613 pod_ready.go:92] pod "kube-scheduler-embed-certs-253253" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:45.350465   50613 pod_ready.go:81] duration metric: took 376.091394ms waiting for pod "kube-scheduler-embed-certs-253253" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:45.350473   50613 pod_ready.go:38] duration metric: took 4.950259719s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:18:45.350487   50613 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:18:45.350529   50613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:18:45.366477   50613 api_server.go:72] duration metric: took 5.203563902s to wait for apiserver process to appear ...
	I1108 00:18:45.366502   50613 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:18:45.366519   50613 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1108 00:18:45.375074   50613 api_server.go:279] https://192.168.39.159:8443/healthz returned 200:
	ok
	I1108 00:18:45.376646   50613 api_server.go:141] control plane version: v1.28.3
	I1108 00:18:45.376666   50613 api_server.go:131] duration metric: took 10.158963ms to wait for apiserver health ...
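
The healthz wait is a plain HTTPS GET against the apiserver; /healthz and /version are reachable anonymously via the system:public-info-viewer role, so an equivalent manual probe from inside the VM (the -k flag skips certificate verification, acceptable for a liveness poke) is simply:

    curl -sk https://192.168.39.159:8443/healthz ; echo     # expect: ok
    curl -sk https://192.168.39.159:8443/version            # reports "gitVersion": "v1.28.3"
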
	I1108 00:18:45.376674   50613 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:18:45.554560   50613 system_pods.go:59] 8 kube-system pods found
	I1108 00:18:45.554598   50613 system_pods.go:61] "coredns-5dd5756b68-thtp4" [a3671b72-d562-4be2-9942-e971ee31b2c3] Running
	I1108 00:18:45.554605   50613 system_pods.go:61] "etcd-embed-certs-253253" [271bb11f-9263-43bb-a1ad-950b066f46bc] Running
	I1108 00:18:45.554611   50613 system_pods.go:61] "kube-apiserver-embed-certs-253253" [f247270e-3c67-4b37-a6ee-31934a59dd3c] Running
	I1108 00:18:45.554618   50613 system_pods.go:61] "kube-controller-manager-embed-certs-253253" [431c2e96-fff2-4076-95d4-11aa43e0d417] Running
	I1108 00:18:45.554624   50613 system_pods.go:61] "kube-proxy-shp9z" [cda240f2-977b-4318-9ee4-74f0090af489] Running
	I1108 00:18:45.554635   50613 system_pods.go:61] "kube-scheduler-embed-certs-253253" [a22238ad-7283-4dbf-8ff2-5626761a6e08] Running
	I1108 00:18:45.554655   50613 system_pods.go:61] "metrics-server-57f55c9bc5-f8rk4" [927cc877-7a22-47e3-b666-1adf0cc1b5c6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:18:45.554697   50613 system_pods.go:61] "storage-provisioner" [fa05e7e5-87e7-43ac-af74-1c8a713b51c5] Running
	I1108 00:18:45.554712   50613 system_pods.go:74] duration metric: took 178.032339ms to wait for pod list to return data ...
	I1108 00:18:45.554722   50613 default_sa.go:34] waiting for default service account to be created ...
	I1108 00:18:45.750181   50613 default_sa.go:45] found service account: "default"
	I1108 00:18:45.750210   50613 default_sa.go:55] duration metric: took 195.480878ms for default service account to be created ...
	I1108 00:18:45.750220   50613 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 00:18:45.953261   50613 system_pods.go:86] 8 kube-system pods found
	I1108 00:18:45.953303   50613 system_pods.go:89] "coredns-5dd5756b68-thtp4" [a3671b72-d562-4be2-9942-e971ee31b2c3] Running
	I1108 00:18:45.953312   50613 system_pods.go:89] "etcd-embed-certs-253253" [271bb11f-9263-43bb-a1ad-950b066f46bc] Running
	I1108 00:18:45.953320   50613 system_pods.go:89] "kube-apiserver-embed-certs-253253" [f247270e-3c67-4b37-a6ee-31934a59dd3c] Running
	I1108 00:18:45.953329   50613 system_pods.go:89] "kube-controller-manager-embed-certs-253253" [431c2e96-fff2-4076-95d4-11aa43e0d417] Running
	I1108 00:18:45.953348   50613 system_pods.go:89] "kube-proxy-shp9z" [cda240f2-977b-4318-9ee4-74f0090af489] Running
	I1108 00:18:45.953360   50613 system_pods.go:89] "kube-scheduler-embed-certs-253253" [a22238ad-7283-4dbf-8ff2-5626761a6e08] Running
	I1108 00:18:45.953375   50613 system_pods.go:89] "metrics-server-57f55c9bc5-f8rk4" [927cc877-7a22-47e3-b666-1adf0cc1b5c6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:18:45.953387   50613 system_pods.go:89] "storage-provisioner" [fa05e7e5-87e7-43ac-af74-1c8a713b51c5] Running
	I1108 00:18:45.953402   50613 system_pods.go:126] duration metric: took 203.174777ms to wait for k8s-apps to be running ...
	I1108 00:18:45.953414   50613 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 00:18:45.953471   50613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:45.969669   50613 system_svc.go:56] duration metric: took 16.24852ms WaitForService to wait for kubelet.
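
systemctl is-active --quiet reports only via its exit code, which is why the log records a duration but no output; a hand-run equivalent needs an explicit echo:

    sudo systemctl is-active --quiet kubelet \
      && echo "kubelet: running" \
      || echo "kubelet: not running"
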
	I1108 00:18:45.969698   50613 kubeadm.go:581] duration metric: took 5.806787278s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1108 00:18:45.969720   50613 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:18:46.150807   50613 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:18:46.150839   50613 node_conditions.go:123] node cpu capacity is 2
	I1108 00:18:46.150853   50613 node_conditions.go:105] duration metric: took 181.127043ms to run NodePressure ...
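
The NodePressure check reads the node's capacity and condition fields. The same numbers (17784752Ki ephemeral storage, 2 CPUs) can be pulled with kubectl jsonpath; the node name below is the one from this run:

    kubectl get node embed-certs-253253 \
      -o jsonpath='{.status.capacity.cpu} cpu / {.status.capacity.ephemeral-storage} ephemeral{"\n"}'
    kubectl get node embed-certs-253253 \
      -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
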
	I1108 00:18:46.150866   50613 start.go:228] waiting for startup goroutines ...
	I1108 00:18:46.150876   50613 start.go:233] waiting for cluster config update ...
	I1108 00:18:46.150886   50613 start.go:242] writing updated cluster config ...
	I1108 00:18:46.151185   50613 ssh_runner.go:195] Run: rm -f paused
	I1108 00:18:46.209047   50613 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1108 00:18:46.211074   50613 out.go:177] * Done! kubectl is now configured to use "embed-certs-253253" cluster and "default" namespace by default
	I1108 00:18:44.564102   50505 pod_ready.go:97] pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.61.176 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-11-08 00:18:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-11-08 00:18:33 +0000 UTC,FinishedAt:2023-11-08 00:18:43 +0000 UTC,ContainerID:cri-o://4ffd62a60718dd1c6133afefc215085069920afc1cca2f055336a977765569cb,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3 ContainerID:cri-o://4ffd62a60718dd1c6133afefc215085069920afc1cca2f055336a977765569cb Started:0xc0035e3d00 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1108 00:18:44.564132   50505 pod_ready.go:81] duration metric: took 13.91522436s waiting for pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace to be "Ready" ...
	E1108 00:18:44.564147   50505 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-l9prx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.61.176 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-11-08 00:18:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-11-08 00:18:33 +0000 UTC,FinishedAt:2023-11-08 00:18:43 +0000 UTC,ContainerID:cri-o://4ffd62a60718dd1c6133afefc215085069920afc1cca2f055336a977765569cb,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3 ContainerID:cri-o://4ffd62a60718dd1c6133afefc215085069920afc1cca2f055336a977765569cb Started:0xc0035e3d00 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1108 00:18:44.564158   50505 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vl7nr" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.573431   50505 pod_ready.go:92] pod "coredns-5dd5756b68-vl7nr" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.573462   50505 pod_ready.go:81] duration metric: took 9.295648ms waiting for pod "coredns-5dd5756b68-vl7nr" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.573473   50505 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.580792   50505 pod_ready.go:92] pod "etcd-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.580828   50505 pod_ready.go:81] duration metric: took 7.346504ms waiting for pod "etcd-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.580840   50505 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.587095   50505 pod_ready.go:92] pod "kube-apiserver-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.587117   50505 pod_ready.go:81] duration metric: took 6.268891ms waiting for pod "kube-apiserver-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.587130   50505 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.594022   50505 pod_ready.go:92] pod "kube-controller-manager-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.594039   50505 pod_ready.go:81] duration metric: took 6.901477ms waiting for pod "kube-controller-manager-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.594052   50505 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m6k8g" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.960144   50505 pod_ready.go:92] pod "kube-proxy-m6k8g" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:44.960162   50505 pod_ready.go:81] duration metric: took 366.102529ms waiting for pod "kube-proxy-m6k8g" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:44.960173   50505 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:45.361366   50505 pod_ready.go:92] pod "kube-scheduler-no-preload-320390" in "kube-system" namespace has status "Ready":"True"
	I1108 00:18:45.361388   50505 pod_ready.go:81] duration metric: took 401.208779ms waiting for pod "kube-scheduler-no-preload-320390" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:45.361396   50505 pod_ready.go:38] duration metric: took 14.818791823s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:18:45.361408   50505 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:18:45.361453   50505 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:18:45.377632   50505 api_server.go:72] duration metric: took 15.078013421s to wait for apiserver process to appear ...
	I1108 00:18:45.377656   50505 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:18:45.377673   50505 api_server.go:253] Checking apiserver healthz at https://192.168.61.176:8443/healthz ...
	I1108 00:18:45.383912   50505 api_server.go:279] https://192.168.61.176:8443/healthz returned 200:
	ok
	I1108 00:18:45.385131   50505 api_server.go:141] control plane version: v1.28.3
	I1108 00:18:45.385153   50505 api_server.go:131] duration metric: took 7.489916ms to wait for apiserver health ...
	I1108 00:18:45.385163   50505 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:18:45.565081   50505 system_pods.go:59] 8 kube-system pods found
	I1108 00:18:45.565112   50505 system_pods.go:61] "coredns-5dd5756b68-vl7nr" [4c6d5125-ebac-4931-9af7-045d1c4ba2b1] Running
	I1108 00:18:45.565120   50505 system_pods.go:61] "etcd-no-preload-320390" [fed32a26-d2ab-4470-b424-cc123c0afdf2] Running
	I1108 00:18:45.565127   50505 system_pods.go:61] "kube-apiserver-no-preload-320390" [4cc8b2c1-0f11-4fa9-ab08-0b6039e98b08] Running
	I1108 00:18:45.565134   50505 system_pods.go:61] "kube-controller-manager-no-preload-320390" [028b3d4e-ab62-44c3-b78e-268012d13db3] Running
	I1108 00:18:45.565141   50505 system_pods.go:61] "kube-proxy-m6k8g" [60b019bf-527c-4265-a67c-31e6cf377039] Running
	I1108 00:18:45.565149   50505 system_pods.go:61] "kube-scheduler-no-preload-320390" [c9c606b6-8188-4918-a5c6-cdc845ca5fb4] Running
	I1108 00:18:45.565157   50505 system_pods.go:61] "metrics-server-57f55c9bc5-n49bz" [26c5310d-c29f-476a-a520-bd693143e248] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:18:45.565171   50505 system_pods.go:61] "storage-provisioner" [bdba396c-182a-4bef-8ccb-2275534d89c8] Running
	I1108 00:18:45.565185   50505 system_pods.go:74] duration metric: took 180.015317ms to wait for pod list to return data ...
	I1108 00:18:45.565196   50505 default_sa.go:34] waiting for default service account to be created ...
	I1108 00:18:45.760190   50505 default_sa.go:45] found service account: "default"
	I1108 00:18:45.760217   50505 default_sa.go:55] duration metric: took 195.014175ms for default service account to be created ...
	I1108 00:18:45.760227   50505 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 00:18:45.966186   50505 system_pods.go:86] 8 kube-system pods found
	I1108 00:18:45.966223   50505 system_pods.go:89] "coredns-5dd5756b68-vl7nr" [4c6d5125-ebac-4931-9af7-045d1c4ba2b1] Running
	I1108 00:18:45.966231   50505 system_pods.go:89] "etcd-no-preload-320390" [fed32a26-d2ab-4470-b424-cc123c0afdf2] Running
	I1108 00:18:45.966239   50505 system_pods.go:89] "kube-apiserver-no-preload-320390" [4cc8b2c1-0f11-4fa9-ab08-0b6039e98b08] Running
	I1108 00:18:45.966245   50505 system_pods.go:89] "kube-controller-manager-no-preload-320390" [028b3d4e-ab62-44c3-b78e-268012d13db3] Running
	I1108 00:18:45.966252   50505 system_pods.go:89] "kube-proxy-m6k8g" [60b019bf-527c-4265-a67c-31e6cf377039] Running
	I1108 00:18:45.966259   50505 system_pods.go:89] "kube-scheduler-no-preload-320390" [c9c606b6-8188-4918-a5c6-cdc845ca5fb4] Running
	I1108 00:18:45.966268   50505 system_pods.go:89] "metrics-server-57f55c9bc5-n49bz" [26c5310d-c29f-476a-a520-bd693143e248] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:18:45.966279   50505 system_pods.go:89] "storage-provisioner" [bdba396c-182a-4bef-8ccb-2275534d89c8] Running
	I1108 00:18:45.966294   50505 system_pods.go:126] duration metric: took 206.05956ms to wait for k8s-apps to be running ...
	I1108 00:18:45.966305   50505 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 00:18:45.966355   50505 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:45.984753   50505 system_svc.go:56] duration metric: took 18.427005ms WaitForService to wait for kubelet.
	I1108 00:18:45.984781   50505 kubeadm.go:581] duration metric: took 15.685164805s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1108 00:18:45.984803   50505 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:18:46.159568   50505 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:18:46.159602   50505 node_conditions.go:123] node cpu capacity is 2
	I1108 00:18:46.159615   50505 node_conditions.go:105] duration metric: took 174.805156ms to run NodePressure ...
	I1108 00:18:46.159627   50505 start.go:228] waiting for startup goroutines ...
	I1108 00:18:46.159636   50505 start.go:233] waiting for cluster config update ...
	I1108 00:18:46.159649   50505 start.go:242] writing updated cluster config ...
	I1108 00:18:46.159934   50505 ssh_runner.go:195] Run: rm -f paused
	I1108 00:18:46.220234   50505 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1108 00:18:46.222217   50505 out.go:177] * Done! kubectl is now configured to use "no-preload-320390" cluster and "default" namespace by default
	I1108 00:18:46.222047   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:48.714709   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:46.109921   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:46.223968   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:46.849987   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:47.349982   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:47.850871   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:48.350081   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:48.850494   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:49.350809   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:49.850515   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:50.350227   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:50.850044   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:50.714976   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:53.214612   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:51.350594   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:51.850705   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:52.349971   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:52.850530   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:53.350696   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:53.850039   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:54.350523   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:54.849805   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:55.350560   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:55.849890   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:56.350679   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:56.849863   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:57.350004   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:57.850463   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:58.349999   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:58.850810   51228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:18:58.958213   51228 kubeadm.go:1081] duration metric: took 13.258132625s to wait for elevateKubeSystemPrivileges.
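
The half-second cadence of `kubectl get sa default` above is a poll: workloads cannot run in a namespace until the controller-manager has created its default ServiceAccount. A sketch of the same wait as a shell loop, reusing the log's own command:

    until sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
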
	I1108 00:18:58.958253   51228 kubeadm.go:406] StartCluster complete in 5m8.559036824s
	I1108 00:18:58.958281   51228 settings.go:142] acquiring lock: {Name:mk24113e0811d0822c92609e9886aa6fa175d90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:18:58.958371   51228 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:18:58.960083   51228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:18:58.960306   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 00:18:58.960417   51228 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1108 00:18:58.960497   51228 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-039263"
	I1108 00:18:58.960505   51228 config.go:182] Loaded profile config "default-k8s-diff-port-039263": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:18:58.960517   51228 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-039263"
	I1108 00:18:58.960544   51228 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-039263"
	I1108 00:18:58.960521   51228 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-039263"
	I1108 00:18:58.960538   51228 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-039263"
	I1108 00:18:58.960588   51228 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-039263"
	W1108 00:18:58.960607   51228 addons.go:240] addon metrics-server should already be in state true
	I1108 00:18:58.960654   51228 host.go:66] Checking if "default-k8s-diff-port-039263" exists ...
	W1108 00:18:58.960566   51228 addons.go:240] addon storage-provisioner should already be in state true
	I1108 00:18:58.960732   51228 host.go:66] Checking if "default-k8s-diff-port-039263" exists ...
	I1108 00:18:58.961043   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:58.961079   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:58.961112   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:58.961115   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:58.961155   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:58.961164   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:58.980365   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41725
	I1108 00:18:58.980386   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46535
	I1108 00:18:58.980512   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45857
	I1108 00:18:58.980860   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:58.980912   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:58.980863   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:58.981328   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:58.981350   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:58.981457   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:58.981466   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:58.981477   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:58.981483   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:58.981861   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:58.981861   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:58.981863   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:58.982023   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetState
	I1108 00:18:58.982419   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:58.982429   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:58.982447   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:58.982464   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:58.985852   51228 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-039263"
	W1108 00:18:58.985875   51228 addons.go:240] addon default-storageclass should already be in state true
	I1108 00:18:58.985902   51228 host.go:66] Checking if "default-k8s-diff-port-039263" exists ...
	I1108 00:18:58.986359   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:58.986390   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:58.996161   51228 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-039263" context rescaled to 1 replicas
	I1108 00:18:58.996200   51228 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.116 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
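
The coredns rescale above trims kubeadm's default two-replica Deployment down to one, which is enough for a single-node cluster; done by hand it is roughly:

    kubectl -n kube-system scale deployment coredns --replicas=1
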
	I1108 00:18:58.998257   51228 out.go:177] * Verifying Kubernetes components...
	I1108 00:18:58.999857   51228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:18:58.999917   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35521
	I1108 00:18:58.998777   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I1108 00:18:59.000380   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:59.001040   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:59.001093   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:59.001205   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:59.001478   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:59.001674   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:59.001690   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:59.001762   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetState
	I1108 00:18:59.002038   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:59.002209   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetState
	I1108 00:18:59.003822   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:18:59.006057   51228 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1108 00:18:59.004254   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:18:59.006174   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46331
	I1108 00:18:59.007678   51228 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 00:18:59.007688   51228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 00:18:59.007706   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:18:59.009545   51228 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:18:55.714548   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:57.715173   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:18:59.007989   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:59.010470   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:18:59.010632   51228 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:18:59.010640   51228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 00:18:59.010653   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:18:59.011015   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:18:59.011039   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:18:59.011227   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:59.011250   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:59.011650   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:59.011657   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:18:59.012158   51228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:18:59.012188   51228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:18:59.012671   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:18:59.012805   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:18:59.012925   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:18:59.013938   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:18:59.014329   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:18:59.014348   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:18:59.014493   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:18:59.014645   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:18:59.014770   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:18:59.014879   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:18:59.030160   51228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44631
	I1108 00:18:59.030558   51228 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:18:59.031087   51228 main.go:141] libmachine: Using API Version  1
	I1108 00:18:59.031101   51228 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:18:59.031353   51228 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:18:59.031558   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetState
	I1108 00:18:59.033203   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .DriverName
	I1108 00:18:59.033540   51228 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 00:18:59.033556   51228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 00:18:59.033573   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHHostname
	I1108 00:18:59.036749   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:18:59.037158   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:72:05", ip: ""} in network mk-default-k8s-diff-port-039263: {Iface:virbr2 ExpiryTime:2023-11-08 01:13:32 +0000 UTC Type:0 Mac:52:54:00:aa:72:05 Iaid: IPaddr:192.168.72.116 Prefix:24 Hostname:default-k8s-diff-port-039263 Clientid:01:52:54:00:aa:72:05}
	I1108 00:18:59.037177   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | domain default-k8s-diff-port-039263 has defined IP address 192.168.72.116 and MAC address 52:54:00:aa:72:05 in network mk-default-k8s-diff-port-039263
	I1108 00:18:59.037364   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHPort
	I1108 00:18:59.037551   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHKeyPath
	I1108 00:18:59.037684   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .GetSSHUsername
	I1108 00:18:59.037791   51228 sshutil.go:53] new ssh client: &{IP:192.168.72.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/default-k8s-diff-port-039263/id_rsa Username:docker}
	I1108 00:18:59.349254   51228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 00:18:59.451588   51228 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-039263" to be "Ready" ...
	I1108 00:18:59.451664   51228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 00:18:59.464584   51228 node_ready.go:49] node "default-k8s-diff-port-039263" has status "Ready":"True"
	I1108 00:18:59.464616   51228 node_ready.go:38] duration metric: took 12.97792ms waiting for node "default-k8s-diff-port-039263" to be "Ready" ...
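
These readiness gates have direct kubectl equivalents, shown below with this run's node name and the same 6m budget; `kubectl wait` blocks until the condition holds or the timeout expires:

    kubectl wait --for=condition=Ready node/default-k8s-diff-port-039263 --timeout=6m
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
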
	I1108 00:18:59.464629   51228 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:18:59.475428   51228 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7ktrv" in "kube-system" namespace to be "Ready" ...
	I1108 00:18:59.481740   51228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:18:59.483627   51228 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 00:18:59.483644   51228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1108 00:18:59.599214   51228 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 00:18:59.599244   51228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 00:18:59.661512   51228 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:18:59.661537   51228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 00:18:59.726775   51228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:19:01.455332   51228 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.003642063s)
	I1108 00:19:01.455368   51228 start.go:926] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
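
The long sed pipeline that just completed splices a hosts block (plus a log directive) into the CoreDNS Corefile so in-cluster DNS resolves host.minikube.internal to the host-side gateway 192.168.72.1. A quick way to confirm the record landed:

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
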
	I1108 00:19:01.455575   51228 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.106281369s)
	I1108 00:19:01.455635   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:01.455659   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:01.455957   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:01.456004   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:01.456026   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:01.456048   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:01.456296   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:01.456332   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:01.456339   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Closing plugin on server side
	I1108 00:19:01.485941   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:01.485970   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:01.486229   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:01.486287   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Closing plugin on server side
	I1108 00:19:01.486294   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:01.599500   51228 pod_ready.go:102] pod "coredns-5dd5756b68-7ktrv" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:01.893463   51228 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.411687372s)
	I1108 00:19:01.893518   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:01.893530   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:01.893844   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:01.893887   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Closing plugin on server side
	I1108 00:19:01.893904   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:01.893918   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:01.893928   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:01.894199   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:01.894215   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:02.421714   51228 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.694889947s)
	I1108 00:19:02.421768   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:02.421785   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:02.422098   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:02.422123   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:02.422141   51228 main.go:141] libmachine: Making call to close driver server
	I1108 00:19:02.422160   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) Calling .Close
	I1108 00:19:02.422138   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Closing plugin on server side
	I1108 00:19:02.422425   51228 main.go:141] libmachine: (default-k8s-diff-port-039263) DBG | Closing plugin on server side
	I1108 00:19:02.422467   51228 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:19:02.422480   51228 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:19:02.422492   51228 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-039263"
	I1108 00:19:02.424446   51228 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1108 00:18:59.715708   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:02.214990   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:02.426041   51228 addons.go:502] enable addons completed in 3.465624772s: enabled=[default-storageclass storage-provisioner metrics-server]
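
One caveat when reading the Pending metrics-server pods throughout this report: the addon was pointed at fake.domain/registry.k8s.io/echoserver:1.4 (see the "Using image" line above), an unpullable image, which suggests these tests never expect the pod to become Ready. On a normally configured cluster the addon would be verified with:

    kubectl get apiservice v1beta1.metrics.k8s.io     # Available=True once it serves
    kubectl top node                                  # returns per-node CPU/memory usage
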
	I1108 00:19:02.549025   51228 pod_ready.go:97] pod "coredns-5dd5756b68-7ktrv" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.72.116 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-11-08 00:18:58 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-11-08 00:19:01 +0000 UTC,FinishedAt:2023-11-08 00:19:01 +0000 UTC,ContainerID:cri-o://31fbf2f57498e1f90b02c6fd31ebc03a12f99cb350d5e2c4e6eb7ae3b30853b9,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://31fbf2f57498e1f90b02c6fd31ebc03a12f99cb350d5e2c4e6eb7ae3b30853b9 Started:0xc0030b331c AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1108 00:19:02.549056   51228 pod_ready.go:81] duration metric: took 3.073604936s waiting for pod "coredns-5dd5756b68-7ktrv" in "kube-system" namespace to be "Ready" ...
	E1108 00:19:02.549069   51228 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-7ktrv" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-08 00:18:58 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.72.116 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-11-08 00:18:58 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-11-08 00:19:01 +0000 UTC,FinishedAt:2023-11-08 00:19:01 +0000 UTC,ContainerID:cri-o://31fbf2f57498e1f90b02c6fd31ebc03a12f99cb350d5e2c4e6eb7ae3b30853b9,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://31fbf2f57498e1f90b02c6fd31ebc03a12f99cb350d5e2c4e6eb7ae3b30853b9 Started:0xc0030b331c AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1108 00:19:02.549076   51228 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-tt9sm" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.096421   51228 pod_ready.go:92] pod "coredns-5dd5756b68-tt9sm" in "kube-system" namespace has status "Ready":"True"
	I1108 00:19:03.096449   51228 pod_ready.go:81] duration metric: took 547.365037ms waiting for pod "coredns-5dd5756b68-tt9sm" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.096461   51228 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.104473   51228 pod_ready.go:92] pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"True"
	I1108 00:19:03.104497   51228 pod_ready.go:81] duration metric: took 8.028055ms waiting for pod "etcd-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.104509   51228 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.108940   51228 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"True"
	I1108 00:19:03.108965   51228 pod_ready.go:81] duration metric: took 4.447315ms waiting for pod "kube-apiserver-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.108976   51228 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.458803   51228 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"True"
	I1108 00:19:03.458831   51228 pod_ready.go:81] duration metric: took 349.845574ms waiting for pod "kube-controller-manager-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:03.458844   51228 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rhdhg" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:04.256435   51228 pod_ready.go:92] pod "kube-proxy-rhdhg" in "kube-system" namespace has status "Ready":"True"
	I1108 00:19:04.256457   51228 pod_ready.go:81] duration metric: took 797.605956ms waiting for pod "kube-proxy-rhdhg" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:04.256466   51228 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:04.655727   51228 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace has status "Ready":"True"
	I1108 00:19:04.655750   51228 pod_ready.go:81] duration metric: took 399.277263ms waiting for pod "kube-scheduler-default-k8s-diff-port-039263" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:04.655758   51228 pod_ready.go:38] duration metric: took 5.191103655s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:19:04.655772   51228 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:19:04.655823   51228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:19:04.671030   51228 api_server.go:72] duration metric: took 5.674798555s to wait for apiserver process to appear ...
	I1108 00:19:04.671059   51228 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:19:04.671076   51228 api_server.go:253] Checking apiserver healthz at https://192.168.72.116:8444/healthz ...
	I1108 00:19:04.677315   51228 api_server.go:279] https://192.168.72.116:8444/healthz returned 200:
	ok
	I1108 00:19:04.678430   51228 api_server.go:141] control plane version: v1.28.3
	I1108 00:19:04.678451   51228 api_server.go:131] duration metric: took 7.384898ms to wait for apiserver health ...
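The healthz wait logged above is a plain HTTPS GET against the apiserver, treated as healthy once it returns 200 with body "ok". A minimal self-contained Go sketch of such a probe (an illustration, not minikube's implementation; the hard-coded URL and the skipped certificate verification are simplifying assumptions):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // apiserverHealthy reports whether GET <url> returns 200 with body "ok",
    // which is the condition the log above waits for.
    func apiserverHealthy(url string) (bool, error) {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The apiserver serves a cluster-internal certificate; a real
            // client would trust the cluster CA instead of skipping checks.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return false, err
        }
        return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }

    func main() {
        healthy, err := apiserverHealthy("https://192.168.72.116:8444/healthz")
        fmt.Println(healthy, err)
    }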
	I1108 00:19:04.678457   51228 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:19:04.866585   51228 system_pods.go:59] 8 kube-system pods found
	I1108 00:19:04.866617   51228 system_pods.go:61] "coredns-5dd5756b68-tt9sm" [964a0552-9be0-4dbb-9a2f-0be3c93b8f83] Running
	I1108 00:19:04.866622   51228 system_pods.go:61] "etcd-default-k8s-diff-port-039263" [36863807-9899-4a8e-9a18-e3d938be8e8a] Running
	I1108 00:19:04.866626   51228 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-039263" [88677a44-54e3-41d7-8395-7616396a52d4] Running
	I1108 00:19:04.866631   51228 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-039263" [61a04987-85c4-462c-a4a7-1438c079b72b] Running
	I1108 00:19:04.866635   51228 system_pods.go:61] "kube-proxy-rhdhg" [405b26b9-e6b3-440d-8f28-60db650079a8] Running
	I1108 00:19:04.866639   51228 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-039263" [2a36824a-77da-4a54-94f4-484452f1b714] Running
	I1108 00:19:04.866666   51228 system_pods.go:61] "metrics-server-57f55c9bc5-j6t7g" [5c0e827c-8281-4b51-b0c7-d43d0aa22e29] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:19:04.866676   51228 system_pods.go:61] "storage-provisioner" [4cace2ff-d7cd-4d31-9f11-d410bc675cbf] Running
	I1108 00:19:04.866684   51228 system_pods.go:74] duration metric: took 188.222131ms to wait for pod list to return data ...
	I1108 00:19:04.866691   51228 default_sa.go:34] waiting for default service account to be created ...
	I1108 00:19:05.056224   51228 default_sa.go:45] found service account: "default"
	I1108 00:19:05.056251   51228 default_sa.go:55] duration metric: took 189.551289ms for default service account to be created ...
	I1108 00:19:05.056263   51228 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 00:19:05.259774   51228 system_pods.go:86] 8 kube-system pods found
	I1108 00:19:05.259800   51228 system_pods.go:89] "coredns-5dd5756b68-tt9sm" [964a0552-9be0-4dbb-9a2f-0be3c93b8f83] Running
	I1108 00:19:05.259805   51228 system_pods.go:89] "etcd-default-k8s-diff-port-039263" [36863807-9899-4a8e-9a18-e3d938be8e8a] Running
	I1108 00:19:05.259810   51228 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-039263" [88677a44-54e3-41d7-8395-7616396a52d4] Running
	I1108 00:19:05.259814   51228 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-039263" [61a04987-85c4-462c-a4a7-1438c079b72b] Running
	I1108 00:19:05.259818   51228 system_pods.go:89] "kube-proxy-rhdhg" [405b26b9-e6b3-440d-8f28-60db650079a8] Running
	I1108 00:19:05.259822   51228 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-039263" [2a36824a-77da-4a54-94f4-484452f1b714] Running
	I1108 00:19:05.259828   51228 system_pods.go:89] "metrics-server-57f55c9bc5-j6t7g" [5c0e827c-8281-4b51-b0c7-d43d0aa22e29] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:19:05.259832   51228 system_pods.go:89] "storage-provisioner" [4cace2ff-d7cd-4d31-9f11-d410bc675cbf] Running
	I1108 00:19:05.259840   51228 system_pods.go:126] duration metric: took 203.572791ms to wait for k8s-apps to be running ...
	I1108 00:19:05.259846   51228 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 00:19:05.259889   51228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:19:05.274254   51228 system_svc.go:56] duration metric: took 14.400341ms WaitForService to wait for kubelet.
	I1108 00:19:05.274277   51228 kubeadm.go:581] duration metric: took 6.278053459s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1108 00:19:05.274304   51228 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:19:05.457057   51228 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:19:05.457086   51228 node_conditions.go:123] node cpu capacity is 2
	I1108 00:19:05.457097   51228 node_conditions.go:105] duration metric: took 182.787127ms to run NodePressure ...
	I1108 00:19:05.457107   51228 start.go:228] waiting for startup goroutines ...
	I1108 00:19:05.457113   51228 start.go:233] waiting for cluster config update ...
	I1108 00:19:05.457122   51228 start.go:242] writing updated cluster config ...
	I1108 00:19:05.457358   51228 ssh_runner.go:195] Run: rm -f paused
	I1108 00:19:05.507414   51228 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1108 00:19:05.509695   51228 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-039263" cluster and "default" namespace by default
	I1108 00:19:04.715259   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:07.214815   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:09.214886   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:11.715679   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:14.215690   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:16.716315   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:19.215323   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:21.715872   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:24.215543   50022 pod_ready.go:102] pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace has status "Ready":"False"
	I1108 00:19:26.409609   50022 pod_ready.go:81] duration metric: took 4m0.000552573s waiting for pod "metrics-server-74d5856cc6-ghpjp" in "kube-system" namespace to be "Ready" ...
	E1108 00:19:26.409644   50022 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1108 00:19:26.409659   50022 pod_ready.go:38] duration metric: took 4m1.201158343s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:19:26.409684   50022 kubeadm.go:640] restartCluster took 5m11.212754497s
	W1108 00:19:26.409757   50022 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1108 00:19:26.409790   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1108 00:19:31.401367   50022 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.991549602s)
	I1108 00:19:31.401473   50022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:19:31.415823   50022 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 00:19:31.425384   50022 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 00:19:31.435585   50022 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 00:19:31.435635   50022 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1108 00:19:31.492015   50022 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1108 00:19:31.492120   50022 kubeadm.go:322] [preflight] Running pre-flight checks
	I1108 00:19:31.649293   50022 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 00:19:31.649437   50022 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 00:19:31.649605   50022 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1108 00:19:31.886799   50022 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 00:19:31.886955   50022 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 00:19:31.896062   50022 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1108 00:19:32.038269   50022 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 00:19:32.040677   50022 out.go:204]   - Generating certificates and keys ...
	I1108 00:19:32.040833   50022 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1108 00:19:32.040945   50022 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1108 00:19:32.041037   50022 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1108 00:19:32.041085   50022 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1108 00:19:32.041142   50022 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1108 00:19:32.041231   50022 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1108 00:19:32.041346   50022 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1108 00:19:32.041441   50022 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1108 00:19:32.041594   50022 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1108 00:19:32.042173   50022 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1108 00:19:32.042236   50022 kubeadm.go:322] [certs] Using the existing "sa" key
	I1108 00:19:32.042302   50022 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 00:19:32.325005   50022 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 00:19:32.544755   50022 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 00:19:32.726539   50022 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 00:19:32.905403   50022 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 00:19:32.906525   50022 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 00:19:32.908371   50022 out.go:204]   - Booting up control plane ...
	I1108 00:19:32.908514   50022 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 00:19:32.919163   50022 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 00:19:32.919256   50022 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 00:19:32.919387   50022 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 00:19:32.928261   50022 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1108 00:19:42.937037   50022 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.006146 seconds
	I1108 00:19:42.937215   50022 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 00:19:42.955795   50022 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 00:19:43.479726   50022 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 00:19:43.479868   50022 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-590541 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1108 00:19:43.989897   50022 kubeadm.go:322] [bootstrap-token] Using token: rpiq38.6eoemv6ygv6ghnel
	I1108 00:19:43.991262   50022 out.go:204]   - Configuring RBAC rules ...
	I1108 00:19:43.991391   50022 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 00:19:44.001502   50022 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 00:19:44.006931   50022 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 00:19:44.012505   50022 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 00:19:44.021422   50022 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 00:19:44.111517   50022 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1108 00:19:44.412934   50022 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1108 00:19:44.412985   50022 kubeadm.go:322] 
	I1108 00:19:44.413073   50022 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1108 00:19:44.413088   50022 kubeadm.go:322] 
	I1108 00:19:44.413186   50022 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1108 00:19:44.413196   50022 kubeadm.go:322] 
	I1108 00:19:44.413230   50022 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1108 00:19:44.413317   50022 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 00:19:44.413388   50022 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 00:19:44.413398   50022 kubeadm.go:322] 
	I1108 00:19:44.413489   50022 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1108 00:19:44.413608   50022 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 00:19:44.413704   50022 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 00:19:44.413720   50022 kubeadm.go:322] 
	I1108 00:19:44.413851   50022 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1108 00:19:44.413974   50022 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1108 00:19:44.413988   50022 kubeadm.go:322] 
	I1108 00:19:44.414090   50022 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token rpiq38.6eoemv6ygv6ghnel \
	I1108 00:19:44.414288   50022 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 \
	I1108 00:19:44.414337   50022 kubeadm.go:322]     --control-plane 	  
	I1108 00:19:44.414347   50022 kubeadm.go:322] 
	I1108 00:19:44.414458   50022 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1108 00:19:44.414474   50022 kubeadm.go:322] 
	I1108 00:19:44.414593   50022 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token rpiq38.6eoemv6ygv6ghnel \
	I1108 00:19:44.414754   50022 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:a1c46ba0eec310eacb69a4c2d9262dcad5bd9af8aef0022b80b6505310b22713 
	I1108 00:19:44.416038   50022 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 00:19:44.416063   50022 cni.go:84] Creating CNI manager for ""
	I1108 00:19:44.416073   50022 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1108 00:19:44.417877   50022 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 00:19:44.419195   50022 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 00:19:44.448380   50022 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1108 00:19:44.474228   50022 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 00:19:44.474339   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:44.474380   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=old-k8s-version-590541 minikube.k8s.io/updated_at=2023_11_08T00_19_44_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:44.739449   50022 ops.go:34] apiserver oom_adj: -16
	I1108 00:19:44.739605   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:44.848712   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:45.444347   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:45.944721   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:46.444140   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:46.944185   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:47.444342   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:47.944227   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:48.443941   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:48.944002   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:49.444440   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:49.943801   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:50.444481   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:50.944720   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:51.443857   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:51.943755   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:52.444663   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:52.944052   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:53.443917   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:53.943763   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:54.443886   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:54.944615   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:55.444156   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:55.944693   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:56.443823   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:56.944727   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:57.444188   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:57.943966   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:58.444659   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:58.944651   50022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 00:19:59.061808   50022 kubeadm.go:1081] duration metric: took 14.587519972s to wait for elevateKubeSystemPrivileges.
	I1108 00:19:59.061855   50022 kubeadm.go:406] StartCluster complete in 5m43.925088245s
	I1108 00:19:59.061878   50022 settings.go:142] acquiring lock: {Name:mk24113e0811d0822c92609e9886aa6fa175d90c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:19:59.061962   50022 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:19:59.063740   50022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/kubeconfig: {Name:mk153c95cf832ad410a2c28062b4e7cc54043ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 00:19:59.064004   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 00:19:59.064107   50022 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1108 00:19:59.064182   50022 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-590541"
	I1108 00:19:59.064198   50022 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-590541"
	I1108 00:19:59.064213   50022 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-590541"
	W1108 00:19:59.064222   50022 addons.go:240] addon storage-provisioner should already be in state true
	I1108 00:19:59.064224   50022 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-590541"
	I1108 00:19:59.064233   50022 config.go:182] Loaded profile config "old-k8s-version-590541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1108 00:19:59.064236   50022 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-590541"
	I1108 00:19:59.064260   50022 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-590541"
	I1108 00:19:59.064265   50022 host.go:66] Checking if "old-k8s-version-590541" exists ...
	W1108 00:19:59.064274   50022 addons.go:240] addon metrics-server should already be in state true
	I1108 00:19:59.064406   50022 host.go:66] Checking if "old-k8s-version-590541" exists ...
	I1108 00:19:59.064720   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.064757   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.064761   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.064797   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.065271   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.065309   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.082041   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37295
	I1108 00:19:59.082534   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.083051   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.083075   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.083432   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.083970   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.084022   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.084099   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40485
	I1108 00:19:59.084222   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34213
	I1108 00:19:59.084440   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.084605   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.084870   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.084887   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.085151   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.085174   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.085248   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.085427   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetState
	I1108 00:19:59.085480   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.086399   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.086442   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.090677   50022 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-590541"
	W1108 00:19:59.090700   50022 addons.go:240] addon default-storageclass should already be in state true
	I1108 00:19:59.090728   50022 host.go:66] Checking if "old-k8s-version-590541" exists ...
	I1108 00:19:59.091092   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.091130   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.101788   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40869
	I1108 00:19:59.102208   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.102631   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.102648   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.103029   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.103219   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetState
	I1108 00:19:59.104809   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44911
	I1108 00:19:59.104937   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:19:59.106844   50022 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1108 00:19:59.105475   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.108350   50022 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 00:19:59.108374   50022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 00:19:59.108403   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:19:59.108551   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45009
	I1108 00:19:59.108910   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.108930   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.109878   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.109881   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.110039   50022 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-590541" context rescaled to 1 replicas
	I1108 00:19:59.110075   50022 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.49 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 00:19:59.111637   50022 out.go:177] * Verifying Kubernetes components...
	I1108 00:19:59.110208   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetState
	I1108 00:19:59.110398   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.113108   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.113220   50022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:19:59.113743   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.113792   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:19:59.114471   50022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1108 00:19:59.114510   50022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 00:19:59.115179   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:19:59.117011   50022 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 00:19:59.115897   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:19:59.116172   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:19:59.118325   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:19:59.118358   50022 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:19:59.118370   50022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 00:19:59.118383   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:19:59.118504   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:19:59.118696   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:19:59.118854   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:19:59.120889   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:19:59.121255   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:19:59.121280   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:19:59.121465   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:19:59.121647   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:19:59.121783   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:19:59.121868   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:19:59.135569   50022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40853
	I1108 00:19:59.135977   50022 main.go:141] libmachine: () Calling .GetVersion
	I1108 00:19:59.136428   50022 main.go:141] libmachine: Using API Version  1
	I1108 00:19:59.136441   50022 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 00:19:59.136799   50022 main.go:141] libmachine: () Calling .GetMachineName
	I1108 00:19:59.137027   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetState
	I1108 00:19:59.138503   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .DriverName
	I1108 00:19:59.138735   50022 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 00:19:59.138745   50022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 00:19:59.138758   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHHostname
	I1108 00:19:59.141494   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:19:59.141870   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:aa:82", ip: ""} in network mk-old-k8s-version-590541: {Iface:virbr4 ExpiryTime:2023-11-08 01:13:56 +0000 UTC Type:0 Mac:52:54:00:3c:aa:82 Iaid: IPaddr:192.168.50.49 Prefix:24 Hostname:old-k8s-version-590541 Clientid:01:52:54:00:3c:aa:82}
	I1108 00:19:59.141895   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | domain old-k8s-version-590541 has defined IP address 192.168.50.49 and MAC address 52:54:00:3c:aa:82 in network mk-old-k8s-version-590541
	I1108 00:19:59.142046   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHPort
	I1108 00:19:59.142248   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHKeyPath
	I1108 00:19:59.142370   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .GetSSHUsername
	I1108 00:19:59.142592   50022 sshutil.go:53] new ssh client: &{IP:192.168.50.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/old-k8s-version-590541/id_rsa Username:docker}
	I1108 00:19:59.281321   50022 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-590541" to be "Ready" ...
	I1108 00:19:59.281572   50022 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 00:19:59.284783   50022 node_ready.go:49] node "old-k8s-version-590541" has status "Ready":"True"
	I1108 00:19:59.284804   50022 node_ready.go:38] duration metric: took 3.444344ms waiting for node "old-k8s-version-590541" to be "Ready" ...
	I1108 00:19:59.284830   50022 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:19:59.290322   50022 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace to be "Ready" ...
	I1108 00:19:59.290908   50022 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 00:19:59.290925   50022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1108 00:19:59.311485   50022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 00:19:59.346809   50022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 00:19:59.350361   50022 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 00:19:59.350385   50022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 00:19:59.403305   50022 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:19:59.403328   50022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 00:19:59.479823   50022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 00:20:00.224554   50022 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
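The "host record injected" message above is the result of the sed pipeline run at 00:19:59.281572: it rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway IP. Reconstructed from that command, the patched Corefile gains a log directive before errors and a hosts block immediately ahead of the forward directive (the other lines shown are assumed CoreDNS defaults, not read from the live ConfigMap):

    .:53 {
        log
        errors
        hosts {
           192.168.50.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }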
	I1108 00:20:00.659427   50022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.347903115s)
	I1108 00:20:00.659441   50022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.312604515s)
	I1108 00:20:00.659501   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.659533   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.659536   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.659549   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.659834   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.659857   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.659867   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.659876   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.659933   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Closing plugin on server side
	I1108 00:20:00.659981   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.660022   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.660051   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.660062   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.660131   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Closing plugin on server side
	I1108 00:20:00.660242   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.660254   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.660300   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.660321   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.851614   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.851637   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.851930   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Closing plugin on server side
	I1108 00:20:00.851996   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.852027   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.992341   50022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.5124613s)
	I1108 00:20:00.992412   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.992429   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.992774   50022 main.go:141] libmachine: (old-k8s-version-590541) DBG | Closing plugin on server side
	I1108 00:20:00.992811   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.992830   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.992841   50022 main.go:141] libmachine: Making call to close driver server
	I1108 00:20:00.992854   50022 main.go:141] libmachine: (old-k8s-version-590541) Calling .Close
	I1108 00:20:00.993100   50022 main.go:141] libmachine: Successfully made call to close driver server
	I1108 00:20:00.993122   50022 main.go:141] libmachine: Making call to close connection to plugin binary
	I1108 00:20:00.993162   50022 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-590541"
	I1108 00:20:00.995051   50022 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1108 00:20:00.996839   50022 addons.go:502] enable addons completed in 1.932740124s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1108 00:20:01.324759   50022 pod_ready.go:102] pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace has status "Ready":"False"
	I1108 00:20:03.823744   50022 pod_ready.go:102] pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace has status "Ready":"False"
	I1108 00:20:06.322994   50022 pod_ready.go:102] pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace has status "Ready":"False"
	I1108 00:20:08.822755   50022 pod_ready.go:102] pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace has status "Ready":"False"
	I1108 00:20:10.823247   50022 pod_ready.go:102] pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace has status "Ready":"False"
	I1108 00:20:12.819017   50022 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-979rq" not found
	I1108 00:20:12.819052   50022 pod_ready.go:81] duration metric: took 13.528699598s waiting for pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace to be "Ready" ...
	E1108 00:20:12.819067   50022 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-979rq" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-979rq" not found
	I1108 00:20:12.819075   50022 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-tbfp7" in "kube-system" namespace to be "Ready" ...
	I1108 00:20:12.825970   50022 pod_ready.go:92] pod "coredns-5644d7b6d9-tbfp7" in "kube-system" namespace has status "Ready":"True"
	I1108 00:20:12.825988   50022 pod_ready.go:81] duration metric: took 6.906077ms waiting for pod "coredns-5644d7b6d9-tbfp7" in "kube-system" namespace to be "Ready" ...
	I1108 00:20:12.825996   50022 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p27g4" in "kube-system" namespace to be "Ready" ...
	I1108 00:20:12.830826   50022 pod_ready.go:92] pod "kube-proxy-p27g4" in "kube-system" namespace has status "Ready":"True"
	I1108 00:20:12.830843   50022 pod_ready.go:81] duration metric: took 4.841517ms waiting for pod "kube-proxy-p27g4" in "kube-system" namespace to be "Ready" ...
	I1108 00:20:12.830852   50022 pod_ready.go:38] duration metric: took 13.54601076s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 00:20:12.830866   50022 api_server.go:52] waiting for apiserver process to appear ...
	I1108 00:20:12.830909   50022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 00:20:12.849600   50022 api_server.go:72] duration metric: took 13.739491815s to wait for apiserver process to appear ...
	I1108 00:20:12.849634   50022 api_server.go:88] waiting for apiserver healthz status ...
	I1108 00:20:12.849653   50022 api_server.go:253] Checking apiserver healthz at https://192.168.50.49:8443/healthz ...
	I1108 00:20:12.856740   50022 api_server.go:279] https://192.168.50.49:8443/healthz returned 200:
	ok
	I1108 00:20:12.857940   50022 api_server.go:141] control plane version: v1.16.0
	I1108 00:20:12.857960   50022 api_server.go:131] duration metric: took 8.319568ms to wait for apiserver health ...
	I1108 00:20:12.857967   50022 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 00:20:12.862192   50022 system_pods.go:59] 4 kube-system pods found
	I1108 00:20:12.862217   50022 system_pods.go:61] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:12.862222   50022 system_pods.go:61] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:12.862230   50022 system_pods.go:61] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:12.862239   50022 system_pods.go:61] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:12.862248   50022 system_pods.go:74] duration metric: took 4.275078ms to wait for pod list to return data ...
	I1108 00:20:12.862257   50022 default_sa.go:34] waiting for default service account to be created ...
	I1108 00:20:12.867018   50022 default_sa.go:45] found service account: "default"
	I1108 00:20:12.867043   50022 default_sa.go:55] duration metric: took 4.778337ms for default service account to be created ...
	I1108 00:20:12.867052   50022 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 00:20:12.871638   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:12.871664   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:12.871671   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:12.871682   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:12.871688   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:12.871706   50022 retry.go:31] will retry after 307.408821ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:13.184897   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:13.184927   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:13.184944   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:13.184954   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:13.184963   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:13.184984   50022 retry.go:31] will retry after 301.786347ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:13.492026   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:13.492053   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:13.492058   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:13.492065   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:13.492070   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:13.492085   50022 retry.go:31] will retry after 396.219719ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:13.893320   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:13.893348   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:13.893356   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:13.893366   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:13.893372   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:13.893390   50022 retry.go:31] will retry after 592.540002ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:14.490613   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:14.490638   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:14.490644   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:14.490651   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:14.490655   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:14.490670   50022 retry.go:31] will retry after 512.19038ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:15.008506   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:15.008533   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:15.008539   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:15.008545   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:15.008586   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:15.008606   50022 retry.go:31] will retry after 704.779032ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:15.719115   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:15.719140   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:15.719145   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:15.719152   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:15.719156   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:15.719174   50022 retry.go:31] will retry after 892.457504ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:16.616738   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:16.616764   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:16.616770   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:16.616776   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:16.616781   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:16.616795   50022 retry.go:31] will retry after 1.107800827s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:17.729962   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:17.729989   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:17.729997   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:17.730007   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:17.730014   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:17.730032   50022 retry.go:31] will retry after 1.24176205s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:18.976866   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:18.976891   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:18.976897   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:18.976905   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:18.976910   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:18.976925   50022 retry.go:31] will retry after 1.449825188s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:20.432723   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:20.432753   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:20.432760   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:20.432770   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:20.432776   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:20.432796   50022 retry.go:31] will retry after 1.764186569s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:22.202432   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:22.202465   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:22.202473   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:22.202484   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:22.202491   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:22.202522   50022 retry.go:31] will retry after 3.392893976s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:25.600685   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:25.600712   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:25.600717   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:25.600723   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:25.600728   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:25.600743   50022 retry.go:31] will retry after 3.537590817s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:29.143439   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:29.143464   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:29.143468   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:29.143475   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:29.143482   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:29.143502   50022 retry.go:31] will retry after 3.82527374s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:32.973763   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:32.973796   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:32.973804   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:32.973814   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:32.973821   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:32.973840   50022 retry.go:31] will retry after 6.225201923s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:39.204648   50022 system_pods.go:86] 4 kube-system pods found
	I1108 00:20:39.204682   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:39.204690   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:39.204702   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:39.204710   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:39.204729   50022 retry.go:31] will retry after 7.177772259s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:46.388992   50022 system_pods.go:86] 5 kube-system pods found
	I1108 00:20:46.389016   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:46.389022   50022 system_pods.go:89] "kube-apiserver-old-k8s-version-590541" [87b2cf34-c41c-47e0-9042-75cc9f45a3c5] Pending
	I1108 00:20:46.389025   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:46.389032   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:46.389037   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:46.389052   50022 retry.go:31] will retry after 8.995080935s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1108 00:20:55.391202   50022 system_pods.go:86] 7 kube-system pods found
	I1108 00:20:55.391228   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:20:55.391233   50022 system_pods.go:89] "etcd-old-k8s-version-590541" [0efed662-1891-4909-9452-76ec2984dbe2] Running
	I1108 00:20:55.391237   50022 system_pods.go:89] "kube-apiserver-old-k8s-version-590541" [87b2cf34-c41c-47e0-9042-75cc9f45a3c5] Running
	I1108 00:20:55.391241   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:20:55.391245   50022 system_pods.go:89] "kube-scheduler-old-k8s-version-590541" [a722f002-c4ab-467a-810a-20cf46a13211] Pending
	I1108 00:20:55.391252   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:20:55.391256   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:20:55.391272   50022 retry.go:31] will retry after 10.028239262s: missing components: kube-controller-manager, kube-scheduler
	I1108 00:21:05.426292   50022 system_pods.go:86] 8 kube-system pods found
	I1108 00:21:05.426317   50022 system_pods.go:89] "coredns-5644d7b6d9-tbfp7" [af8ab5b9-9401-4755-86af-663236159220] Running
	I1108 00:21:05.426323   50022 system_pods.go:89] "etcd-old-k8s-version-590541" [0efed662-1891-4909-9452-76ec2984dbe2] Running
	I1108 00:21:05.426327   50022 system_pods.go:89] "kube-apiserver-old-k8s-version-590541" [87b2cf34-c41c-47e0-9042-75cc9f45a3c5] Running
	I1108 00:21:05.426331   50022 system_pods.go:89] "kube-controller-manager-old-k8s-version-590541" [90563d50-3d48-4256-ae70-82a2a6d1c251] Running
	I1108 00:21:05.426335   50022 system_pods.go:89] "kube-proxy-p27g4" [a2474fe2-c0f8-42a0-b276-56ff1113cac5] Running
	I1108 00:21:05.426339   50022 system_pods.go:89] "kube-scheduler-old-k8s-version-590541" [a722f002-c4ab-467a-810a-20cf46a13211] Running
	I1108 00:21:05.426345   50022 system_pods.go:89] "metrics-server-74d5856cc6-b4rtb" [bfd72ad0-3c33-4a96-88b1-f18bc20b224c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 00:21:05.426349   50022 system_pods.go:89] "storage-provisioner" [e23d9653-c31d-4713-be02-30b067b1b6aa] Running
	I1108 00:21:05.426356   50022 system_pods.go:126] duration metric: took 52.559298515s to wait for k8s-apps to be running ...
	I1108 00:21:05.426363   50022 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 00:21:05.426403   50022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 00:21:05.443281   50022 system_svc.go:56] duration metric: took 16.903571ms WaitForService to wait for kubelet.
	I1108 00:21:05.443315   50022 kubeadm.go:581] duration metric: took 1m6.333213694s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1108 00:21:05.443337   50022 node_conditions.go:102] verifying NodePressure condition ...
	I1108 00:21:05.447040   50022 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 00:21:05.447064   50022 node_conditions.go:123] node cpu capacity is 2
	I1108 00:21:05.447074   50022 node_conditions.go:105] duration metric: took 3.731788ms to run NodePressure ...
	I1108 00:21:05.447083   50022 start.go:228] waiting for startup goroutines ...
	I1108 00:21:05.447089   50022 start.go:233] waiting for cluster config update ...
	I1108 00:21:05.447098   50022 start.go:242] writing updated cluster config ...
	I1108 00:21:05.447409   50022 ssh_runner.go:195] Run: rm -f paused
	I1108 00:21:05.496203   50022 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1108 00:21:05.498233   50022 out.go:177] 
	W1108 00:21:05.499660   50022 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1108 00:21:05.500985   50022 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1108 00:21:05.502464   50022 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-590541" cluster and "default" namespace by default
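	The repeated "retry.go:31] will retry after ..." lines above are minikube polling kube-system until etcd, kube-apiserver, kube-controller-manager, and kube-scheduler all report Running; the logged delays grow from roughly 0.3s to 10s. Below is a minimal Go sketch of that jittered, roughly exponential backoff wait — waitForSystemPods and checkFn are illustrative names for this report, not minikube's actual API:

    // Sketch of a jittered exponential-backoff wait, modelled on the
    // "retry.go:31] will retry after ..." lines above. waitForSystemPods
    // and checkFn are illustrative names, not minikube's real API.
    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // checkFn reports which required control-plane components are still missing.
    type checkFn func() (missing []string)

    func waitForSystemPods(check checkFn, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	delay := 300 * time.Millisecond // first logged retry is ~300ms
    	for {
    		missing := check()
    		if len(missing) == 0 {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out; still missing: %v", missing)
    		}
    		// Add up to 50% jitter, then roughly double the base delay,
    		// capped so retries settle near 10s as in the log above.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
    		fmt.Printf("will retry after %v: missing components: %v\n", sleep, missing)
    		time.Sleep(sleep)
    		if delay < 8*time.Second {
    			delay *= 2
    		}
    	}
    }

    func main() {
    	attempts := 0
    	err := waitForSystemPods(func() []string {
    		attempts++
    		if attempts < 5 { // pretend the control plane needs a few polls
    			return []string{"etcd", "kube-apiserver"}
    		}
    		return nil
    	}, time.Minute)
    	fmt.Println("done, err =", err)
    }

	Capping the base delay keeps the poll interval near 10s once the control plane is slow to appear, which matches the 8.9s and 10.0s retries logged just before kube-apiserver turned up above.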
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-11-08 00:13:55 UTC, ends at Wed 2023-11-08 00:32:17 UTC. --
	Nov 08 00:32:17 old-k8s-version-590541 crio[718]: time="2023-11-08 00:32:17.521989534Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699403537521978180,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=4d3ee6e9-adbf-43a0-9dda-24967683dc4c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:32:17 old-k8s-version-590541 crio[718]: time="2023-11-08 00:32:17.522455277Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e5a12b68-544b-4650-a98b-fd60c0412461 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:32:17 old-k8s-version-590541 crio[718]: time="2023-11-08 00:32:17.522593313Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e5a12b68-544b-4650-a98b-fd60c0412461 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:32:17 old-k8s-version-590541 crio[718]: time="2023-11-08 00:32:17.522756593Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cb87567dbf1a28ba3db5bc16945a47009d33ef3a348f951bd546c8806b60243d,PodSandboxId:7676c112a35a1d2ba86064ddc0f5c70700c18e8b67ed70907aa4dfa91d0ef49f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699402801679591587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23d9653-c31d-4713-be02-30b067b1b6aa,},Annotations:map[string]string{io.kubernetes.container.hash: 574f188d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff54e527b90d3760072290ef2cf557ae01212dae0ecb5c2f9bfa3c9dfafc99d,PodSandboxId:70d31f118fbbd2aae7131af19a402a79bab029614903824a58b01397f3a2f100,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1699402801263850837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p27g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2474fe2-c0f8-42a0-b276-56ff1113cac5,},Annotations:map[string]string{io.kubernetes.container.hash: 1f4230ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd93c4c016654cfa3004b5f68b3a6e7e5a9259b5dffcc48a12faeb01a28f9acb,PodSandboxId:c94a3ec1035c3cb4a0310b661854867b45d6bda9f1a0a50aba109be755c8ee85,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1699402800422911487,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-tbfp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af8ab5b9-9401-4755-86af-663236159220,},Annotations:map[string]string{io.kubernetes.container.hash: 300a4655,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d506696340f36500b2c12181eca3a195084bbaa9507b0c84c4df15ce9771189,PodSandboxId:c65c1d615c8a59dd0653646b7e1cfbaeee17f787f8067ea4f3e1bb3c53938c19,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1699402775795608444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdb0033a70b4c2a18dc2febf194bdbd,},Annotations:map[string]string{io.kubernetes.container.hash: cc017a0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c25719e59dbe1e0b49dc46a12c055e6114f3c50f8ec24d160bdb86d2b9cc54,PodSandboxId:8cae970912eac8b61cf49d014a86474988b437448577df4d3b45285d223bde9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1699402774005442507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58550bb028adaeff43b5b4b387c8c233db04bdb5c32d5d4cdce83e52fd4f4415,PodSandboxId:f2678d0a9b35be7d3215c2aaced7fc235864f613ba4cb57aa90c3a0cd60210ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1699402774031829374,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b67fab18718e30cc1c826158c31b300c007d62ebc0676b934f154f1442e6ffa,PodSandboxId:3d05ccd26595f87efc2ba3b8bda016418f12c0556864b66fc9932375f61a4dc9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1699402773895990735,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fb993be369e1a1142f88ada62a3c61,},Annotations:map[string]string{io.kubernetes.container.hash: 7ba6a2ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e5a12b68-544b-4650-a98b-fd60c0412461 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:32:17 old-k8s-version-590541 crio[718]: time="2023-11-08 00:32:17.564419505Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4c2e0829-714a-4b63-9409-30525eefddcc name=/runtime.v1.RuntimeService/Version
	Nov 08 00:32:17 old-k8s-version-590541 crio[718]: time="2023-11-08 00:32:17.564483150Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4c2e0829-714a-4b63-9409-30525eefddcc name=/runtime.v1.RuntimeService/Version
	Nov 08 00:32:17 old-k8s-version-590541 crio[718]: time="2023-11-08 00:32:17.565841027Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=46e7df1b-a7aa-4849-85d8-d788ebc8d715 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:32:17 old-k8s-version-590541 crio[718]: time="2023-11-08 00:32:17.566244252Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699403537566232203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=46e7df1b-a7aa-4849-85d8-d788ebc8d715 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:32:17 old-k8s-version-590541 crio[718]: time="2023-11-08 00:32:17.566823096Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f9fe5137-3855-4b8c-8014-2d9d8499189c name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:32:17 old-k8s-version-590541 crio[718]: time="2023-11-08 00:32:17.566925209Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f9fe5137-3855-4b8c-8014-2d9d8499189c name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:32:17 old-k8s-version-590541 crio[718]: time="2023-11-08 00:32:17.567096395Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cb87567dbf1a28ba3db5bc16945a47009d33ef3a348f951bd546c8806b60243d,PodSandboxId:7676c112a35a1d2ba86064ddc0f5c70700c18e8b67ed70907aa4dfa91d0ef49f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699402801679591587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23d9653-c31d-4713-be02-30b067b1b6aa,},Annotations:map[string]string{io.kubernetes.container.hash: 574f188d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff54e527b90d3760072290ef2cf557ae01212dae0ecb5c2f9bfa3c9dfafc99d,PodSandboxId:70d31f118fbbd2aae7131af19a402a79bab029614903824a58b01397f3a2f100,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1699402801263850837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p27g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2474fe2-c0f8-42a0-b276-56ff1113cac5,},Annotations:map[string]string{io.kubernetes.container.hash: 1f4230ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd93c4c016654cfa3004b5f68b3a6e7e5a9259b5dffcc48a12faeb01a28f9acb,PodSandboxId:c94a3ec1035c3cb4a0310b661854867b45d6bda9f1a0a50aba109be755c8ee85,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1699402800422911487,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-tbfp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af8ab5b9-9401-4755-86af-663236159220,},Annotations:map[string]string{io.kubernetes.container.hash: 300a4655,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d506696340f36500b2c12181eca3a195084bbaa9507b0c84c4df15ce9771189,PodSandboxId:c65c1d615c8a59dd0653646b7e1cfbaeee17f787f8067ea4f3e1bb3c53938c19,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1699402775795608444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdb0033a70b4c2a18dc2febf194bdbd,},Annotations:map[string]string{io.kubernetes.container.hash: cc017a0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c25719e59dbe1e0b49dc46a12c055e6114f3c50f8ec24d160bdb86d2b9cc54,PodSandboxId:8cae970912eac8b61cf49d014a86474988b437448577df4d3b45285d223bde9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1699402774005442507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58550bb028adaeff43b5b4b387c8c233db04bdb5c32d5d4cdce83e52fd4f4415,PodSandboxId:f2678d0a9b35be7d3215c2aaced7fc235864f613ba4cb57aa90c3a0cd60210ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1699402774031829374,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b67fab18718e30cc1c826158c31b300c007d62ebc0676b934f154f1442e6ffa,PodSandboxId:3d05ccd26595f87efc2ba3b8bda016418f12c0556864b66fc9932375f61a4dc9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1699402773895990735,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fb993be369e1a1142f88ada62a3c61,},Annotations:map[string]string{io.kubernetes.container.hash: 7ba6a2ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f9fe5137-3855-4b8c-8014-2d9d8499189c name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:32:17 old-k8s-version-590541 crio[718]: time="2023-11-08 00:32:17.611879386Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=912e7ded-1e58-4331-806c-79e1d388ef6c name=/runtime.v1.RuntimeService/Version
	Nov 08 00:32:17 old-k8s-version-590541 crio[718]: time="2023-11-08 00:32:17.611992454Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=912e7ded-1e58-4331-806c-79e1d388ef6c name=/runtime.v1.RuntimeService/Version
	Nov 08 00:32:17 old-k8s-version-590541 crio[718]: time="2023-11-08 00:32:17.613250219Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=23c4e8d0-5342-4f5e-a324-653a276d7d46 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:32:17 old-k8s-version-590541 crio[718]: time="2023-11-08 00:32:17.613741582Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699403537613727177,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=23c4e8d0-5342-4f5e-a324-653a276d7d46 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:32:17 old-k8s-version-590541 crio[718]: time="2023-11-08 00:32:17.614788798Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7fc8cd72-e7ee-48b2-9018-0d6bd0a32d03 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:32:17 old-k8s-version-590541 crio[718]: time="2023-11-08 00:32:17.614843837Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7fc8cd72-e7ee-48b2-9018-0d6bd0a32d03 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:32:17 old-k8s-version-590541 crio[718]: time="2023-11-08 00:32:17.615018348Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cb87567dbf1a28ba3db5bc16945a47009d33ef3a348f951bd546c8806b60243d,PodSandboxId:7676c112a35a1d2ba86064ddc0f5c70700c18e8b67ed70907aa4dfa91d0ef49f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699402801679591587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23d9653-c31d-4713-be02-30b067b1b6aa,},Annotations:map[string]string{io.kubernetes.container.hash: 574f188d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff54e527b90d3760072290ef2cf557ae01212dae0ecb5c2f9bfa3c9dfafc99d,PodSandboxId:70d31f118fbbd2aae7131af19a402a79bab029614903824a58b01397f3a2f100,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1699402801263850837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p27g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2474fe2-c0f8-42a0-b276-56ff1113cac5,},Annotations:map[string]string{io.kubernetes.container.hash: 1f4230ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd93c4c016654cfa3004b5f68b3a6e7e5a9259b5dffcc48a12faeb01a28f9acb,PodSandboxId:c94a3ec1035c3cb4a0310b661854867b45d6bda9f1a0a50aba109be755c8ee85,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1699402800422911487,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-tbfp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af8ab5b9-9401-4755-86af-663236159220,},Annotations:map[string]string{io.kubernetes.container.hash: 300a4655,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d506696340f36500b2c12181eca3a195084bbaa9507b0c84c4df15ce9771189,PodSandboxId:c65c1d615c8a59dd0653646b7e1cfbaeee17f787f8067ea4f3e1bb3c53938c19,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1699402775795608444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdb0033a70b4c2a18dc2febf194bdbd,},Annotations:map[string]string{io.kubernetes.container.hash: cc017a0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c25719e59dbe1e0b49dc46a12c055e6114f3c50f8ec24d160bdb86d2b9cc54,PodSandboxId:8cae970912eac8b61cf49d014a86474988b437448577df4d3b45285d223bde9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1699402774005442507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58550bb028adaeff43b5b4b387c8c233db04bdb5c32d5d4cdce83e52fd4f4415,PodSandboxId:f2678d0a9b35be7d3215c2aaced7fc235864f613ba4cb57aa90c3a0cd60210ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1699402774031829374,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b67fab18718e30cc1c826158c31b300c007d62ebc0676b934f154f1442e6ffa,PodSandboxId:3d05ccd26595f87efc2ba3b8bda016418f12c0556864b66fc9932375f61a4dc9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1699402773895990735,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fb993be369e1a1142f88ada62a3c61,},Annotations:map[string]string{io.kubernetes.container.hash: 7ba6a2ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7fc8cd72-e7ee-48b2-9018-0d6bd0a32d03 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:32:17 old-k8s-version-590541 crio[718]: time="2023-11-08 00:32:17.653396794Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=00b546d3-c186-4e5c-aaf1-0b7f9aed6eff name=/runtime.v1.RuntimeService/Version
	Nov 08 00:32:17 old-k8s-version-590541 crio[718]: time="2023-11-08 00:32:17.653455604Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=00b546d3-c186-4e5c-aaf1-0b7f9aed6eff name=/runtime.v1.RuntimeService/Version
	Nov 08 00:32:17 old-k8s-version-590541 crio[718]: time="2023-11-08 00:32:17.654846062Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c44832b6-2fcc-4339-bff6-16c195792904 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:32:17 old-k8s-version-590541 crio[718]: time="2023-11-08 00:32:17.655232629Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1699403537655219183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=c44832b6-2fcc-4339-bff6-16c195792904 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 08 00:32:17 old-k8s-version-590541 crio[718]: time="2023-11-08 00:32:17.655815212Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=512dfeba-e9a8-485b-9ab7-7ae4cace6487 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:32:17 old-k8s-version-590541 crio[718]: time="2023-11-08 00:32:17.655892336Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=512dfeba-e9a8-485b-9ab7-7ae4cace6487 name=/runtime.v1.RuntimeService/ListContainers
	Nov 08 00:32:17 old-k8s-version-590541 crio[718]: time="2023-11-08 00:32:17.656069390Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cb87567dbf1a28ba3db5bc16945a47009d33ef3a348f951bd546c8806b60243d,PodSandboxId:7676c112a35a1d2ba86064ddc0f5c70700c18e8b67ed70907aa4dfa91d0ef49f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1699402801679591587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23d9653-c31d-4713-be02-30b067b1b6aa,},Annotations:map[string]string{io.kubernetes.container.hash: 574f188d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff54e527b90d3760072290ef2cf557ae01212dae0ecb5c2f9bfa3c9dfafc99d,PodSandboxId:70d31f118fbbd2aae7131af19a402a79bab029614903824a58b01397f3a2f100,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1699402801263850837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p27g4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2474fe2-c0f8-42a0-b276-56ff1113cac5,},Annotations:map[string]string{io.kubernetes.container.hash: 1f4230ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd93c4c016654cfa3004b5f68b3a6e7e5a9259b5dffcc48a12faeb01a28f9acb,PodSandboxId:c94a3ec1035c3cb4a0310b661854867b45d6bda9f1a0a50aba109be755c8ee85,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1699402800422911487,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-tbfp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af8ab5b9-9401-4755-86af-663236159220,},Annotations:map[string]string{io.kubernetes.container.hash: 300a4655,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d506696340f36500b2c12181eca3a195084bbaa9507b0c84c4df15ce9771189,PodSandboxId:c65c1d615c8a59dd0653646b7e1cfbaeee17f787f8067ea4f3e1bb3c53938c19,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1699402775795608444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afdb0033a70b4c2a18dc2febf194bdbd,},Annotations:map[string]string{io.kubernetes.container.hash: cc017a0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c25719e59dbe1e0b49dc46a12c055e6114f3c50f8ec24d160bdb86d2b9cc54,PodSandboxId:8cae970912eac8b61cf49d014a86474988b437448577df4d3b45285d223bde9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1699402774005442507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58550bb028adaeff43b5b4b387c8c233db04bdb5c32d5d4cdce83e52fd4f4415,PodSandboxId:f2678d0a9b35be7d3215c2aaced7fc235864f613ba4cb57aa90c3a0cd60210ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1699402774031829374,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b67fab18718e30cc1c826158c31b300c007d62ebc0676b934f154f1442e6ffa,PodSandboxId:3d05ccd26595f87efc2ba3b8bda016418f12c0556864b66fc9932375f61a4dc9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1699402773895990735,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-590541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fb993be369e1a1142f88ada62a3c61,},Annotations:map[string]string{io.kubernetes.container.hash: 7ba6a2ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=512dfeba-e9a8-485b-9ab7-7ae4cace6487 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cb87567dbf1a2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 minutes ago      Running             storage-provisioner       0                   7676c112a35a1       storage-provisioner
	4ff54e527b90d       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   12 minutes ago      Running             kube-proxy                0                   70d31f118fbbd       kube-proxy-p27g4
	dd93c4c016654       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   12 minutes ago      Running             coredns                   0                   c94a3ec1035c3       coredns-5644d7b6d9-tbfp7
	7d506696340f3       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   12 minutes ago      Running             etcd                      0                   c65c1d615c8a5       etcd-old-k8s-version-590541
	58550bb028ada       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   12 minutes ago      Running             kube-controller-manager   0                   f2678d0a9b35b       kube-controller-manager-old-k8s-version-590541
	59c25719e59db       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   12 minutes ago      Running             kube-scheduler            0                   8cae970912eac       kube-scheduler-old-k8s-version-590541
	6b67fab18718e       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   12 minutes ago      Running             kube-apiserver            0                   3d05ccd26595f       kube-apiserver-old-k8s-version-590541
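	The table above is the report's rendering of the same container list that the CRI-O debug entries return for ListContainers; on the node it is usually produced with crictl ps -a. Below is a minimal Go sketch of issuing that query directly over the CRI socket named in the node's kubeadm.alpha.kubernetes.io/cri-socket annotation (/var/run/crio/crio.sock) — the program is illustrative, not part of the test suite:

    // Sketch: list containers over the CRI API, mirroring the
    // ListContainersRequest/Response pairs in the crio debug log above.
    // Assumes the default CRI-O socket path; run as root on the node.
    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// An empty request carries no filter, which is why CRI-O logs
    	// "No filters were applied, returning full container list".
    	resp, err := runtimev1.NewRuntimeServiceClient(conn).
    		ListContainers(ctx, &runtimev1.ListContainersRequest{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, c := range resp.Containers {
    		fmt.Printf("%.13s  %-25s %s\n", c.Id, c.Metadata.Name, c.State)
    	}
    }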
	
	* 
	* ==> coredns [dd93c4c016654cfa3004b5f68b3a6e7e5a9259b5dffcc48a12faeb01a28f9acb] <==
	* .:53
	2023-11-08T00:20:00.840Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-11-08T00:20:00.840Z [INFO] CoreDNS-1.6.2
	2023-11-08T00:20:00.840Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-11-08T00:20:35.733Z [INFO] plugin/reload: Running configuration MD5 = 06ff7f9bb57317d7ab02f5fb9baaa00d
	[INFO] Reloading complete
	2023-11-08T00:20:35.751Z [INFO] 127.0.0.1:58167 - 21342 "HINFO IN 2034047240627481077.1396256950986485262. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017476798s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-590541
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-590541
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e
	                    minikube.k8s.io/name=old-k8s-version-590541
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_08T00_19_44_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Nov 2023 00:19:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Nov 2023 00:31:39 +0000   Wed, 08 Nov 2023 00:19:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Nov 2023 00:31:39 +0000   Wed, 08 Nov 2023 00:19:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Nov 2023 00:31:39 +0000   Wed, 08 Nov 2023 00:19:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Nov 2023 00:31:39 +0000   Wed, 08 Nov 2023 00:19:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.49
	  Hostname:    old-k8s-version-590541
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 ea38dbe27e1d423cb00439f981f4114c
	 System UUID:                ea38dbe2-7e1d-423c-b004-39f981f4114c
	 Boot ID:                    c6279805-6470-40f6-8b2b-2a2830f283de
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-tbfp7                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                etcd-old-k8s-version-590541                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-apiserver-old-k8s-version-590541             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-controller-manager-old-k8s-version-590541    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-proxy-p27g4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-scheduler-old-k8s-version-590541             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                metrics-server-74d5856cc6-b4rtb                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         12m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet, old-k8s-version-590541     Node old-k8s-version-590541 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet, old-k8s-version-590541     Node old-k8s-version-590541 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet, old-k8s-version-590541     Node old-k8s-version-590541 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kube-proxy, old-k8s-version-590541  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Nov 8 00:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.075823] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.825914] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.635735] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.147958] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.785770] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Nov 8 00:14] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.129901] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.153234] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.117809] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.231016] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[ +20.122352] systemd-fstab-generator[1030]: Ignoring "noauto" for root device
	[  +0.443718] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +24.765152] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.535089] kauditd_printk_skb: 2 callbacks suppressed
	[Nov 8 00:19] systemd-fstab-generator[3195]: Ignoring "noauto" for root device
	[  +0.766169] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 8 00:20] kauditd_printk_skb: 4 callbacks suppressed
	
	* 
	* ==> etcd [7d506696340f36500b2c12181eca3a195084bbaa9507b0c84c4df15ce9771189] <==
	* 2023-11-08 00:19:35.931804 I | raft: 2916afbfe5f17297 became follower at term 0
	2023-11-08 00:19:35.931813 I | raft: newRaft 2916afbfe5f17297 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-11-08 00:19:35.931816 I | raft: 2916afbfe5f17297 became follower at term 1
	2023-11-08 00:19:35.946270 W | auth: simple token is not cryptographically signed
	2023-11-08 00:19:35.952513 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-11-08 00:19:35.953068 I | etcdserver: 2916afbfe5f17297 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-11-08 00:19:35.953910 I | etcdserver/membership: added member 2916afbfe5f17297 [https://192.168.50.49:2380] to cluster 44542e4adf58543b
	2023-11-08 00:19:35.959508 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-08 00:19:35.960079 I | embed: listening for metrics on http://192.168.50.49:2381
	2023-11-08 00:19:35.960259 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-11-08 00:19:36.032380 I | raft: 2916afbfe5f17297 is starting a new election at term 1
	2023-11-08 00:19:36.032462 I | raft: 2916afbfe5f17297 became candidate at term 2
	2023-11-08 00:19:36.032488 I | raft: 2916afbfe5f17297 received MsgVoteResp from 2916afbfe5f17297 at term 2
	2023-11-08 00:19:36.032508 I | raft: 2916afbfe5f17297 became leader at term 2
	2023-11-08 00:19:36.032612 I | raft: raft.node: 2916afbfe5f17297 elected leader 2916afbfe5f17297 at term 2
	2023-11-08 00:19:36.033104 I | etcdserver: setting up the initial cluster version to 3.3
	2023-11-08 00:19:36.034000 I | etcdserver: published {Name:old-k8s-version-590541 ClientURLs:[https://192.168.50.49:2379]} to cluster 44542e4adf58543b
	2023-11-08 00:19:36.034177 I | embed: ready to serve client requests
	2023-11-08 00:19:36.037399 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-08 00:19:36.037782 I | embed: ready to serve client requests
	2023-11-08 00:19:36.040507 I | embed: serving client requests on 192.168.50.49:2379
	2023-11-08 00:19:36.054612 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-11-08 00:19:36.054741 I | etcdserver/api: enabled capabilities for version 3.3
	2023-11-08 00:29:36.074452 I | mvcc: store.index: compact 668
	2023-11-08 00:29:36.077042 I | mvcc: finished scheduled compaction at 668 (took 2.129497ms)
	
	* 
	* ==> kernel <==
	*  00:32:18 up 18 min,  0 users,  load average: 0.11, 0.08, 0.10
	Linux old-k8s-version-590541 5.10.57 #1 SMP Tue Nov 7 06:51:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [6b67fab18718e30cc1c826158c31b300c007d62ebc0676b934f154f1442e6ffa] <==
	* I1108 00:24:40.332772       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1108 00:24:40.333107       1 handler_proxy.go:99] no RequestInfo found in the context
	E1108 00:24:40.333291       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1108 00:24:40.333334       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1108 00:25:40.333830       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1108 00:25:40.334046       1 handler_proxy.go:99] no RequestInfo found in the context
	E1108 00:25:40.334093       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1108 00:25:40.334114       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1108 00:27:40.334647       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1108 00:27:40.334759       1 handler_proxy.go:99] no RequestInfo found in the context
	E1108 00:27:40.334838       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1108 00:27:40.334849       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1108 00:29:40.335920       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1108 00:29:40.336225       1 handler_proxy.go:99] no RequestInfo found in the context
	E1108 00:29:40.336372       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1108 00:29:40.336399       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1108 00:30:40.336755       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1108 00:30:40.336837       1 handler_proxy.go:99] no RequestInfo found in the context
	E1108 00:30:40.336892       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1108 00:30:40.336903       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [58550bb028adaeff43b5b4b387c8c233db04bdb5c32d5d4cdce83e52fd4f4415] <==
	* E1108 00:26:02.274328       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1108 00:26:23.289006       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1108 00:26:32.526063       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1108 00:26:55.290949       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1108 00:27:02.777881       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1108 00:27:27.293214       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1108 00:27:33.030011       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1108 00:27:59.295188       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1108 00:28:03.282283       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1108 00:28:31.297097       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1108 00:28:33.534511       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1108 00:29:03.299096       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1108 00:29:03.787013       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1108 00:29:34.039285       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1108 00:29:35.301315       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1108 00:30:04.291481       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1108 00:30:07.303202       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1108 00:30:34.543965       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1108 00:30:39.305710       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1108 00:31:04.797191       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1108 00:31:11.308011       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1108 00:31:35.049066       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1108 00:31:43.310168       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1108 00:32:05.301346       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1108 00:32:15.312476       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [4ff54e527b90d3760072290ef2cf557ae01212dae0ecb5c2f9bfa3c9dfafc99d] <==
	* W1108 00:20:01.616170       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1108 00:20:01.630337       1 node.go:135] Successfully retrieved node IP: 192.168.50.49
	I1108 00:20:01.630455       1 server_others.go:149] Using iptables Proxier.
	I1108 00:20:01.631062       1 server.go:529] Version: v1.16.0
	I1108 00:20:01.634102       1 config.go:313] Starting service config controller
	I1108 00:20:01.634257       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1108 00:20:01.635832       1 config.go:131] Starting endpoints config controller
	I1108 00:20:01.635877       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1108 00:20:01.738794       1 shared_informer.go:204] Caches are synced for service config 
	I1108 00:20:01.738942       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [59c25719e59dbe1e0b49dc46a12c055e6114f3c50f8ec24d160bdb86d2b9cc54] <==
	* W1108 00:19:39.325394       1 authentication.go:79] Authentication is disabled
	I1108 00:19:39.325416       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1108 00:19:39.330968       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1108 00:19:39.382778       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1108 00:19:39.382954       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1108 00:19:39.383088       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1108 00:19:39.390874       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1108 00:19:39.391026       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1108 00:19:39.391096       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1108 00:19:39.391141       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1108 00:19:39.391184       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1108 00:19:39.391234       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1108 00:19:39.391277       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1108 00:19:39.391323       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1108 00:19:40.384594       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1108 00:19:40.385650       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1108 00:19:40.392800       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1108 00:19:40.395132       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1108 00:19:40.397128       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1108 00:19:40.400053       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1108 00:19:40.400415       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1108 00:19:40.404773       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1108 00:19:40.407133       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1108 00:19:40.411226       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1108 00:19:40.412378       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-11-08 00:13:55 UTC, ends at Wed 2023-11-08 00:32:18 UTC. --
	Nov 08 00:27:59 old-k8s-version-590541 kubelet[3201]: E1108 00:27:59.627079    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:28:11 old-k8s-version-590541 kubelet[3201]: E1108 00:28:11.626687    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:28:26 old-k8s-version-590541 kubelet[3201]: E1108 00:28:26.627005    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:28:38 old-k8s-version-590541 kubelet[3201]: E1108 00:28:38.627324    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:28:50 old-k8s-version-590541 kubelet[3201]: E1108 00:28:50.627812    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:29:02 old-k8s-version-590541 kubelet[3201]: E1108 00:29:02.626906    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:29:17 old-k8s-version-590541 kubelet[3201]: E1108 00:29:17.626963    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:29:28 old-k8s-version-590541 kubelet[3201]: E1108 00:29:28.626673    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:29:32 old-k8s-version-590541 kubelet[3201]: E1108 00:29:32.726366    3201 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Nov 08 00:29:40 old-k8s-version-590541 kubelet[3201]: E1108 00:29:40.626982    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:29:51 old-k8s-version-590541 kubelet[3201]: E1108 00:29:51.627025    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:30:05 old-k8s-version-590541 kubelet[3201]: E1108 00:30:05.627228    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:30:20 old-k8s-version-590541 kubelet[3201]: E1108 00:30:20.626954    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:30:34 old-k8s-version-590541 kubelet[3201]: E1108 00:30:34.656377    3201 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 08 00:30:34 old-k8s-version-590541 kubelet[3201]: E1108 00:30:34.656454    3201 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 08 00:30:34 old-k8s-version-590541 kubelet[3201]: E1108 00:30:34.656500    3201 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 08 00:30:34 old-k8s-version-590541 kubelet[3201]: E1108 00:30:34.656596    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Nov 08 00:30:46 old-k8s-version-590541 kubelet[3201]: E1108 00:30:46.627744    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:30:58 old-k8s-version-590541 kubelet[3201]: E1108 00:30:58.629143    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:31:09 old-k8s-version-590541 kubelet[3201]: E1108 00:31:09.626829    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:31:23 old-k8s-version-590541 kubelet[3201]: E1108 00:31:23.627435    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:31:35 old-k8s-version-590541 kubelet[3201]: E1108 00:31:35.627152    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:31:48 old-k8s-version-590541 kubelet[3201]: E1108 00:31:48.626862    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:31:59 old-k8s-version-590541 kubelet[3201]: E1108 00:31:59.627070    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 08 00:32:10 old-k8s-version-590541 kubelet[3201]: E1108 00:32:10.627363    3201 pod_workers.go:191] Error syncing pod bfd72ad0-3c33-4a96-88b1-f18bc20b224c ("metrics-server-74d5856cc6-b4rtb_kube-system(bfd72ad0-3c33-4a96-88b1-f18bc20b224c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [cb87567dbf1a28ba3db5bc16945a47009d33ef3a348f951bd546c8806b60243d] <==
	* I1108 00:20:01.833277       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 00:20:01.881419       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 00:20:01.881519       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1108 00:20:01.913318       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 00:20:01.915178       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-590541_b6996c9c-33bf-475b-98b2-3062155f53de!
	I1108 00:20:01.922393       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a277f195-c4dc-42dc-b3b4-4c761e9d10cf", APIVersion:"v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-590541_b6996c9c-33bf-475b-98b2-3062155f53de became leader
	I1108 00:20:02.016915       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-590541_b6996c9c-33bf-475b-98b2-3062155f53de!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-590541 -n old-k8s-version-590541
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-590541 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-b4rtb
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-590541 describe pod metrics-server-74d5856cc6-b4rtb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-590541 describe pod metrics-server-74d5856cc6-b4rtb: exit status 1 (67.069529ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-b4rtb" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-590541 describe pod metrics-server-74d5856cc6-b4rtb: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (130.33s)
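The root cause is visible in the kubelet and kube-apiserver logs above: the metrics-server pod's image resolves to fake.domain/registry.k8s.io/echoserver:1.4, an unresolvable registry, so the pod stays in ImagePullBackOff and the v1beta1.metrics.k8s.io APIService keeps answering 503 until the addon check times out. A minimal manual spot-check along the same lines (a sketch only; the k8s-app=metrics-server label is the conventional one and is assumed here, and the context name is this run's profile):

	kubectl --context old-k8s-version-590541 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context old-k8s-version-590541 get apiservice v1beta1.metrics.k8s.io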

                                                
                                    

Test pass (230/294)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 47.85
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.28.3/json-events 15.11
11 TestDownloadOnly/v1.28.3/preload-exists 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.15
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
19 TestBinaryMirror 0.56
20 TestOffline 132.39
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
25 TestAddons/Setup 208.93
27 TestAddons/parallel/Registry 25.64
29 TestAddons/parallel/InspektorGadget 10.98
30 TestAddons/parallel/MetricsServer 5.89
31 TestAddons/parallel/HelmTiller 12.28
33 TestAddons/parallel/CSI 58.54
34 TestAddons/parallel/Headlamp 23.1
35 TestAddons/parallel/CloudSpanner 6.5
36 TestAddons/parallel/LocalPath 65.9
37 TestAddons/parallel/NvidiaDevicePlugin 5.93
40 TestAddons/serial/GCPAuth/Namespaces 0.12
42 TestCertOptions 115.29
43 TestCertExpiration 281.1
45 TestForceSystemdFlag 77.08
46 TestForceSystemdEnv 62.3
48 TestKVMDriverInstallOrUpdate 5.54
52 TestErrorSpam/setup 46.99
53 TestErrorSpam/start 0.38
54 TestErrorSpam/status 0.75
55 TestErrorSpam/pause 1.54
56 TestErrorSpam/unpause 1.68
57 TestErrorSpam/stop 2.26
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 64.48
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 53.6
64 TestFunctional/serial/KubeContext 0.04
65 TestFunctional/serial/KubectlGetPods 0.08
68 TestFunctional/serial/CacheCmd/cache/add_remote 3.05
69 TestFunctional/serial/CacheCmd/cache/add_local 2.34
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
71 TestFunctional/serial/CacheCmd/cache/list 0.06
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
73 TestFunctional/serial/CacheCmd/cache/cache_reload 1.62
74 TestFunctional/serial/CacheCmd/cache/delete 0.12
75 TestFunctional/serial/MinikubeKubectlCmd 0.12
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
77 TestFunctional/serial/ExtraConfig 39.53
78 TestFunctional/serial/ComponentHealth 0.06
79 TestFunctional/serial/LogsCmd 1.52
80 TestFunctional/serial/LogsFileCmd 1.47
81 TestFunctional/serial/InvalidService 4.25
83 TestFunctional/parallel/ConfigCmd 0.38
84 TestFunctional/parallel/DashboardCmd 21.02
85 TestFunctional/parallel/DryRun 0.3
86 TestFunctional/parallel/InternationalLanguage 0.15
87 TestFunctional/parallel/StatusCmd 1.19
91 TestFunctional/parallel/ServiceCmdConnect 14.55
92 TestFunctional/parallel/AddonsCmd 0.14
93 TestFunctional/parallel/PersistentVolumeClaim 61.38
95 TestFunctional/parallel/SSHCmd 0.48
96 TestFunctional/parallel/CpCmd 0.93
97 TestFunctional/parallel/MySQL 29.9
98 TestFunctional/parallel/FileSync 0.24
99 TestFunctional/parallel/CertSync 1.41
103 TestFunctional/parallel/NodeLabels 0.07
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
108 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
109 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
110 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
111 TestFunctional/parallel/Version/short 0.06
112 TestFunctional/parallel/Version/components 0.51
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.34
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.36
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
117 TestFunctional/parallel/ImageCommands/ImageBuild 6.9
118 TestFunctional/parallel/ImageCommands/Setup 2.36
119 TestFunctional/parallel/MountCmd/any-port 25.73
121 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.7
122 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 9.6
123 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.58
124 TestFunctional/parallel/ImageCommands/ImageRemove 0.61
125 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.13
126 TestFunctional/parallel/MountCmd/specific-port 2.01
127 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 5.86
128 TestFunctional/parallel/MountCmd/VerifyCleanup 1.68
129 TestFunctional/parallel/ServiceCmd/DeployApp 14.25
130 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
131 TestFunctional/parallel/ProfileCmd/profile_list 0.34
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
136 TestFunctional/parallel/ServiceCmd/List 1.25
143 TestFunctional/parallel/ServiceCmd/JSONOutput 1.26
144 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
145 TestFunctional/parallel/ServiceCmd/Format 0.34
146 TestFunctional/parallel/ServiceCmd/URL 0.37
147 TestFunctional/delete_addon-resizer_images 0.06
148 TestFunctional/delete_my-image_image 0.01
149 TestFunctional/delete_minikube_cached_images 0.01
153 TestIngressAddonLegacy/StartLegacyK8sCluster 111.75
155 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 16.43
156 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.57
160 TestJSONOutput/start/Command 70.07
161 TestJSONOutput/start/Audit 0
163 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
166 TestJSONOutput/pause/Command 0.7
167 TestJSONOutput/pause/Audit 0
169 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/unpause/Command 0.66
173 TestJSONOutput/unpause/Audit 0
175 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/stop/Command 7.11
179 TestJSONOutput/stop/Audit 0
181 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
183 TestErrorJSONOutput 0.22
188 TestMainNoArgs 0.06
189 TestMinikubeProfile 97.04
192 TestMountStart/serial/StartWithMountFirst 29.97
193 TestMountStart/serial/VerifyMountFirst 0.41
194 TestMountStart/serial/StartWithMountSecond 29.52
195 TestMountStart/serial/VerifyMountSecond 0.38
196 TestMountStart/serial/DeleteFirst 0.88
197 TestMountStart/serial/VerifyMountPostDelete 0.38
198 TestMountStart/serial/Stop 2.09
199 TestMountStart/serial/RestartStopped 22.26
200 TestMountStart/serial/VerifyMountPostStop 0.38
203 TestMultiNode/serial/FreshStart2Nodes 112.53
204 TestMultiNode/serial/DeployApp2Nodes 6.37
206 TestMultiNode/serial/AddNode 45.76
207 TestMultiNode/serial/ProfileList 0.22
208 TestMultiNode/serial/CopyFile 7.58
209 TestMultiNode/serial/StopNode 2.97
210 TestMultiNode/serial/StartAfterStop 31.75
212 TestMultiNode/serial/DeleteNode 1.78
214 TestMultiNode/serial/RestartMultiNode 445.11
215 TestMultiNode/serial/ValidateNameConflict 48.8
222 TestScheduledStopUnix 120.5
228 TestKubernetesUpgrade 197.1
232 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
240 TestPause/serial/Start 105.22
241 TestNoKubernetes/serial/StartWithK8s 110.44
243 TestNoKubernetes/serial/StartWithStopK8s 7.15
244 TestNoKubernetes/serial/Start 29.07
245 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
246 TestNoKubernetes/serial/ProfileList 1.25
247 TestNoKubernetes/serial/Stop 1.36
248 TestNoKubernetes/serial/StartNoArgs 45.93
256 TestNetworkPlugins/group/false 3.32
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
262 TestStartStop/group/old-k8s-version/serial/FirstStart 172.99
263 TestStoppedBinaryUpgrade/Setup 1.91
266 TestStartStop/group/no-preload/serial/FirstStart 120.57
267 TestStartStop/group/old-k8s-version/serial/DeployApp 12.54
268 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.97
271 TestStartStop/group/embed-certs/serial/FirstStart 61.27
272 TestStartStop/group/no-preload/serial/DeployApp 11.5
273 TestStartStop/group/embed-certs/serial/DeployApp 11.44
274 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.18
276 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.18
278 TestStoppedBinaryUpgrade/MinikubeLogs 0.39
280 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 99.49
282 TestStartStop/group/old-k8s-version/serial/SecondStart 791.21
285 TestStartStop/group/no-preload/serial/SecondStart 572.34
286 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.41
287 TestStartStop/group/embed-certs/serial/SecondStart 564.64
288 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.07
291 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 424.95
301 TestStartStop/group/newest-cni/serial/FirstStart 62.55
302 TestStartStop/group/newest-cni/serial/DeployApp 0
303 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.58
304 TestStartStop/group/newest-cni/serial/Stop 10.46
305 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
306 TestStartStop/group/newest-cni/serial/SecondStart 53.4
307 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
308 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
309 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
310 TestStartStop/group/newest-cni/serial/Pause 3
311 TestNetworkPlugins/group/auto/Start 69.82
312 TestNetworkPlugins/group/kindnet/Start 102.13
313 TestNetworkPlugins/group/calico/Start 103.72
314 TestNetworkPlugins/group/auto/KubeletFlags 0.23
315 TestNetworkPlugins/group/auto/NetCatPod 12.4
316 TestNetworkPlugins/group/auto/DNS 0.2
317 TestNetworkPlugins/group/auto/Localhost 0.16
318 TestNetworkPlugins/group/auto/HairPin 0.18
319 TestNetworkPlugins/group/custom-flannel/Start 91.85
320 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
321 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
322 TestNetworkPlugins/group/kindnet/NetCatPod 12.44
323 TestNetworkPlugins/group/kindnet/DNS 0.19
324 TestNetworkPlugins/group/kindnet/Localhost 0.17
325 TestNetworkPlugins/group/kindnet/HairPin 0.18
326 TestNetworkPlugins/group/enable-default-cni/Start 101.77
327 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
328 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.08
329 TestNetworkPlugins/group/flannel/Start 99.62
330 TestNetworkPlugins/group/calico/ControllerPod 5.04
331 TestNetworkPlugins/group/calico/KubeletFlags 0.23
332 TestNetworkPlugins/group/calico/NetCatPod 12.49
333 TestNetworkPlugins/group/calico/DNS 0.19
334 TestNetworkPlugins/group/calico/Localhost 0.17
335 TestNetworkPlugins/group/calico/HairPin 0.17
336 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
337 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.4
338 TestNetworkPlugins/group/custom-flannel/DNS 0.3
339 TestNetworkPlugins/group/custom-flannel/Localhost 0.23
340 TestNetworkPlugins/group/custom-flannel/HairPin 0.27
341 TestNetworkPlugins/group/bridge/Start 63.93
342 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
343 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.88
344 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
345 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
346 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
347 TestNetworkPlugins/group/flannel/ControllerPod 5.03
348 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
349 TestNetworkPlugins/group/flannel/NetCatPod 13.42
350 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
351 TestNetworkPlugins/group/bridge/NetCatPod 10.32
352 TestNetworkPlugins/group/flannel/DNS 0.16
353 TestNetworkPlugins/group/flannel/Localhost 0.14
354 TestNetworkPlugins/group/flannel/HairPin 0.17
355 TestNetworkPlugins/group/bridge/DNS 33.35
356 TestNetworkPlugins/group/bridge/Localhost 0.15
357 TestNetworkPlugins/group/bridge/HairPin 0.14
x
+
TestDownloadOnly/v1.16.0/json-events (47.85s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-759760 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-759760 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (47.847749444s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (47.85s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
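This subtest completes in 0s because it appears to assert only that the preload tarball fetched during the json-events step is already on disk. An equivalent manual check (a sketch, assuming the cache path recorded in the Last Start log shown below):

	ls -lh /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4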

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-759760
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-759760: exit status 85 (72.961974ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-759760 | jenkins | v1.32.0 | 07 Nov 23 23:01 UTC |          |
	|         | -p download-only-759760        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/07 23:01:05
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 23:01:05.936665   16859 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:01:05.936789   16859 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:01:05.936799   16859 out.go:309] Setting ErrFile to fd 2...
	I1107 23:01:05.936804   16859 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:01:05.937002   16859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
	W1107 23:01:05.937110   16859 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17585-9647/.minikube/config/config.json: open /home/jenkins/minikube-integration/17585-9647/.minikube/config/config.json: no such file or directory
	I1107 23:01:05.937668   16859 out.go:303] Setting JSON to true
	I1107 23:01:05.938539   16859 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2615,"bootTime":1699395451,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 23:01:05.938607   16859 start.go:138] virtualization: kvm guest
	I1107 23:01:05.941064   16859 out.go:97] [download-only-759760] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 23:01:05.942596   16859 out.go:169] MINIKUBE_LOCATION=17585
	W1107 23:01:05.941192   16859 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball: no such file or directory
	I1107 23:01:05.941266   16859 notify.go:220] Checking for updates...
	I1107 23:01:05.945084   16859 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:01:05.946504   16859 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1107 23:01:05.947868   16859 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	I1107 23:01:05.949224   16859 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1107 23:01:05.951269   16859 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1107 23:01:05.951465   16859 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:01:06.049874   16859 out.go:97] Using the kvm2 driver based on user configuration
	I1107 23:01:06.049905   16859 start.go:298] selected driver: kvm2
	I1107 23:01:06.049910   16859 start.go:902] validating driver "kvm2" against <nil>
	I1107 23:01:06.050199   16859 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:01:06.050321   16859 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17585-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1107 23:01:06.064139   16859 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1107 23:01:06.064186   16859 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1107 23:01:06.064638   16859 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1107 23:01:06.064793   16859 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1107 23:01:06.064868   16859 cni.go:84] Creating CNI manager for ""
	I1107 23:01:06.064883   16859 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1107 23:01:06.064892   16859 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1107 23:01:06.064900   16859 start_flags.go:323] config:
	{Name:download-only-759760 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-759760 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:01:06.065097   16859 iso.go:125] acquiring lock: {Name:mk02d02b2a7a45dbdd1b46a32fb0724673cb4d8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:01:06.066966   16859 out.go:97] Downloading VM boot image ...
	I1107 23:01:06.066987   16859 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17585-9647/.minikube/cache/iso/amd64/minikube-v1.32.1-amd64.iso
	I1107 23:01:15.597912   16859 out.go:97] Starting control plane node download-only-759760 in cluster download-only-759760
	I1107 23:01:15.597934   16859 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1107 23:01:15.707006   16859 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1107 23:01:15.707038   16859 cache.go:56] Caching tarball of preloaded images
	I1107 23:01:15.707172   16859 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1107 23:01:15.709284   16859 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1107 23:01:15.709302   16859 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1107 23:01:15.822418   16859 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1107 23:01:29.671949   16859 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1107 23:01:29.672046   16859 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1107 23:01:30.569709   16859 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1107 23:01:30.570113   16859 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/download-only-759760/config.json ...
	I1107 23:01:30.570146   16859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/download-only-759760/config.json: {Name:mk7317aa9a26d8aac67190918ead9d70d319d85b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:01:30.570336   16859 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1107 23:01:30.570563   16859 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17585-9647/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-759760"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

TestDownloadOnly/v1.28.3/json-events (15.11s)

=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-759760 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-759760 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (15.108707447s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (15.11s)

TestDownloadOnly/v1.28.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)

TestDownloadOnly/v1.28.3/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-759760
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-759760: exit status 85 (71.496727ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-759760 | jenkins | v1.32.0 | 07 Nov 23 23:01 UTC |          |
	|         | -p download-only-759760        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-759760 | jenkins | v1.32.0 | 07 Nov 23 23:01 UTC |          |
	|         | -p download-only-759760        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/07 23:01:53
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 23:01:53.856602   17006 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:01:53.856720   17006 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:01:53.856730   17006 out.go:309] Setting ErrFile to fd 2...
	I1107 23:01:53.856734   17006 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:01:53.856988   17006 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
	W1107 23:01:53.857099   17006 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17585-9647/.minikube/config/config.json: open /home/jenkins/minikube-integration/17585-9647/.minikube/config/config.json: no such file or directory
	I1107 23:01:53.857507   17006 out.go:303] Setting JSON to true
	I1107 23:01:53.858297   17006 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2663,"bootTime":1699395451,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 23:01:53.858355   17006 start.go:138] virtualization: kvm guest
	I1107 23:01:53.860674   17006 out.go:97] [download-only-759760] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 23:01:53.862331   17006 out.go:169] MINIKUBE_LOCATION=17585
	I1107 23:01:53.860911   17006 notify.go:220] Checking for updates...
	I1107 23:01:53.865350   17006 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:01:53.866736   17006 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1107 23:01:53.868155   17006 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	I1107 23:01:53.869533   17006 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1107 23:01:53.872169   17006 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1107 23:01:53.872636   17006 config.go:182] Loaded profile config "download-only-759760": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1107 23:01:53.872681   17006 start.go:810] api.Load failed for download-only-759760: filestore "download-only-759760": Docker machine "download-only-759760" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1107 23:01:53.872755   17006 driver.go:378] Setting default libvirt URI to qemu:///system
	W1107 23:01:53.872786   17006 start.go:810] api.Load failed for download-only-759760: filestore "download-only-759760": Docker machine "download-only-759760" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1107 23:01:53.904110   17006 out.go:97] Using the kvm2 driver based on existing profile
	I1107 23:01:53.904146   17006 start.go:298] selected driver: kvm2
	I1107 23:01:53.904152   17006 start.go:902] validating driver "kvm2" against &{Name:download-only-759760 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-759760 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:01:53.904558   17006 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:01:53.904640   17006 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17585-9647/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1107 23:01:53.918711   17006 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1107 23:01:53.919511   17006 cni.go:84] Creating CNI manager for ""
	I1107 23:01:53.919529   17006 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1107 23:01:53.919541   17006 start_flags.go:323] config:
	{Name:download-only-759760 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:download-only-759760 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:01:53.919664   17006 iso.go:125] acquiring lock: {Name:mk02d02b2a7a45dbdd1b46a32fb0724673cb4d8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:01:53.921713   17006 out.go:97] Starting control plane node download-only-759760 in cluster download-only-759760
	I1107 23:01:53.921732   17006 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:01:54.116948   17006 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1107 23:01:54.116999   17006 cache.go:56] Caching tarball of preloaded images
	I1107 23:01:54.117163   17006 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:01:54.119287   17006 out.go:97] Downloading Kubernetes v1.28.3 preload ...
	I1107 23:01:54.119310   17006 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 ...
	I1107 23:01:54.234209   17006 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:6681d82b7b719ef3324102b709ec62eb -> /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1107 23:02:07.158029   17006 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 ...
	I1107 23:02:07.158141   17006 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17585-9647/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 ...
	I1107 23:02:08.085812   17006 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1107 23:02:08.085975   17006 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/download-only-759760/config.json ...
	I1107 23:02:08.086197   17006 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:02:08.086384   17006 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17585-9647/.minikube/cache/linux/amd64/v1.28.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-759760"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.07s)

TestDownloadOnly/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.15s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-759760
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-367625 --alsologtostderr --binary-mirror http://127.0.0.1:44133 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-367625" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-367625
--- PASS: TestBinaryMirror (0.56s)

TestOffline (132.39s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-711737 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-711737 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m11.345824037s)
helpers_test.go:175: Cleaning up "offline-crio-711737" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-711737
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-711737: (1.039583641s)
--- PASS: TestOffline (132.39s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-245409
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-245409: exit status 85 (64.903664ms)

-- stdout --
	* Profile "addons-245409" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-245409"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-245409
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-245409: exit status 85 (65.371194ms)

-- stdout --
	* Profile "addons-245409" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-245409"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (208.93s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-245409 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-245409 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m28.925864377s)
--- PASS: TestAddons/Setup (208.93s)

TestAddons/parallel/Registry (25.64s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 31.504647ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-d4mmm" [283feeee-43ee-480c-87f6-a6c43b6de51a] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.014029421s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-8s5cm" [5e9016ce-da43-4c67-babc-187a8a67e262] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.018778455s
addons_test.go:339: (dbg) Run:  kubectl --context addons-245409 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-245409 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-245409 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (14.193990091s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-245409 ip
2023/11/07 23:06:03 [DEBUG] GET http://192.168.39.205:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-245409 addons disable registry --alsologtostderr -v=1
addons_test.go:387: (dbg) Done: out/minikube-linux-amd64 -p addons-245409 addons disable registry --alsologtostderr -v=1: (1.211932302s)
--- PASS: TestAddons/parallel/Registry (25.64s)

TestAddons/parallel/InspektorGadget (10.98s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-dqjkv" [28a93c72-d684-45e6-b98d-807359cfd095] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.012496115s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-245409
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-245409: (5.963460834s)
--- PASS: TestAddons/parallel/InspektorGadget (10.98s)

TestAddons/parallel/MetricsServer (5.89s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 4.244671ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-br2l5" [e2f32ef9-196f-4d46-b737-eb9c5c547080] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.014363492s
addons_test.go:414: (dbg) Run:  kubectl --context addons-245409 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-245409 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.89s)

TestAddons/parallel/HelmTiller (12.28s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 3.957886ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-g4rcj" [6f59e490-1ddf-4d4c-bf7e-ca497ac5f742] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.017435144s
addons_test.go:472: (dbg) Run:  kubectl --context addons-245409 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-245409 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.61655946s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-245409 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.28s)

TestAddons/parallel/CSI (58.54s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 35.717214ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-245409 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245409 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245409 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245409 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245409 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245409 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-245409 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [87afc978-760f-445d-8aab-49e3ded53ee3] Pending
helpers_test.go:344: "task-pv-pod" [87afc978-760f-445d-8aab-49e3ded53ee3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [87afc978-760f-445d-8aab-49e3ded53ee3] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 18.030453693s
addons_test.go:583: (dbg) Run:  kubectl --context addons-245409 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-245409 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-245409 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-245409 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-245409 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-245409 delete pod task-pv-pod: (2.466151327s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-245409 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-245409 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245409 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245409 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245409 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245409 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245409 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245409 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245409 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245409 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245409 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245409 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245409 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245409 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245409 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245409 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-245409 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [4b955821-c046-48e7-b01e-3b7eba4ff897] Pending
helpers_test.go:344: "task-pv-pod-restore" [4b955821-c046-48e7-b01e-3b7eba4ff897] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [4b955821-c046-48e7-b01e-3b7eba4ff897] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.033464103s
addons_test.go:625: (dbg) Run:  kubectl --context addons-245409 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-245409 delete pod task-pv-pod-restore: (1.039643935s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-245409 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-245409 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-245409 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-245409 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.10778678s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-245409 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (58.54s)

TestAddons/parallel/Headlamp (23.1s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-245409 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-245409 --alsologtostderr -v=1: (2.081297578s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-94b766c-9rm52" [a844ff26-24d3-44ff-8137-3b41431422ff] Pending
helpers_test.go:344: "headlamp-94b766c-9rm52" [a844ff26-24d3-44ff-8137-3b41431422ff] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-94b766c-9rm52" [a844ff26-24d3-44ff-8137-3b41431422ff] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 21.01478424s
--- PASS: TestAddons/parallel/Headlamp (23.10s)

TestAddons/parallel/CloudSpanner (6.5s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-56665cdfc-zt7nl" [c74a7d58-3786-450c-a110-03ffc9332e48] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.028479893s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-245409
addons_test.go:859: (dbg) Done: out/minikube-linux-amd64 addons disable cloud-spanner -p addons-245409: (1.465006011s)
--- PASS: TestAddons/parallel/CloudSpanner (6.50s)

TestAddons/parallel/LocalPath (65.9s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-245409 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-245409 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245409 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245409 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245409 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245409 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245409 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245409 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245409 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245409 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245409 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245409 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [95f52684-76c4-46b2-b439-c2238b77128d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [95f52684-76c4-46b2-b439-c2238b77128d] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [95f52684-76c4-46b2-b439-c2238b77128d] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 12.012363736s
addons_test.go:890: (dbg) Run:  kubectl --context addons-245409 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-245409 ssh "cat /opt/local-path-provisioner/pvc-edd8fb6e-35c5-4be0-b56c-b28712df861d_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-245409 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-245409 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-245409 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-amd64 -p addons-245409 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.235076311s)
--- PASS: TestAddons/parallel/LocalPath (65.90s)

TestAddons/parallel/NvidiaDevicePlugin (5.93s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-fwsrr" [e19bacb9-6af8-46c3-96bf-707e41e6702b] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.043237728s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-245409
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.93s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-245409 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-245409 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestCertOptions (115.29s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-711796 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-711796 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m53.999696422s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-711796 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-711796 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-711796 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-711796" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-711796
--- PASS: TestCertOptions (115.29s)

TestCertExpiration (281.1s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-484343 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-484343 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (56.62240142s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-484343 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-484343 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (43.445418652s)
helpers_test.go:175: Cleaning up "cert-expiration-484343" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-484343
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-484343: (1.025998166s)
--- PASS: TestCertExpiration (281.10s)

TestForceSystemdFlag (77.08s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-602932 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-602932 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m16.069249886s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-602932 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-602932" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-602932
--- PASS: TestForceSystemdFlag (77.08s)

TestForceSystemdEnv (62.3s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-420594 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-420594 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m1.468948784s)
helpers_test.go:175: Cleaning up "force-systemd-env-420594" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-420594
--- PASS: TestForceSystemdEnv (62.30s)

TestKVMDriverInstallOrUpdate (5.54s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.54s)

TestErrorSpam/setup (46.99s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-832021 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-832021 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-832021 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-832021 --driver=kvm2  --container-runtime=crio: (46.98716264s)
--- PASS: TestErrorSpam/setup (46.99s)

TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-832021 --log_dir /tmp/nospam-832021 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-832021 --log_dir /tmp/nospam-832021 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-832021 --log_dir /tmp/nospam-832021 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.75s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-832021 --log_dir /tmp/nospam-832021 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-832021 --log_dir /tmp/nospam-832021 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-832021 --log_dir /tmp/nospam-832021 status
--- PASS: TestErrorSpam/status (0.75s)

TestErrorSpam/pause (1.54s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-832021 --log_dir /tmp/nospam-832021 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-832021 --log_dir /tmp/nospam-832021 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-832021 --log_dir /tmp/nospam-832021 pause
--- PASS: TestErrorSpam/pause (1.54s)

TestErrorSpam/unpause (1.68s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-832021 --log_dir /tmp/nospam-832021 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-832021 --log_dir /tmp/nospam-832021 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-832021 --log_dir /tmp/nospam-832021 unpause
--- PASS: TestErrorSpam/unpause (1.68s)

TestErrorSpam/stop (2.26s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-832021 --log_dir /tmp/nospam-832021 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-832021 --log_dir /tmp/nospam-832021 stop: (2.090678688s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-832021 --log_dir /tmp/nospam-832021 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-832021 --log_dir /tmp/nospam-832021 stop
--- PASS: TestErrorSpam/stop (2.26s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17585-9647/.minikube/files/etc/test/nested/copy/16848/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (64.48s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-514284 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-514284 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m4.474660502s)
--- PASS: TestFunctional/serial/StartWithProxy (64.48s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (53.6s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-514284 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-514284 --alsologtostderr -v=8: (53.602257248s)
functional_test.go:659: soft start took 53.602929879s for "functional-514284" cluster.
--- PASS: TestFunctional/serial/SoftStart (53.60s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-514284 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-514284 cache add registry.k8s.io/pause:3.3: (1.109012231s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-514284 cache add registry.k8s.io/pause:latest: (1.000135095s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.05s)

TestFunctional/serial/CacheCmd/cache/add_local (2.34s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-514284 /tmp/TestFunctionalserialCacheCmdcacheadd_local1944808016/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 cache add minikube-local-cache-test:functional-514284
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-514284 cache add minikube-local-cache-test:functional-514284: (2.000756561s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 cache delete minikube-local-cache-test:functional-514284
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-514284
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.34s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-514284 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (224.757414ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)
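
For reference, the round trip this test exercises can be replayed by hand. A minimal sketch, assuming the functional-514284 profile is still running and substituting the plain `minikube` binary for the test's out/minikube-linux-amd64:

    minikube -p functional-514284 ssh "sudo crictl rmi registry.k8s.io/pause:latest"
    minikube -p functional-514284 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"  # exit 1: image gone from the node
    minikube -p functional-514284 cache reload
    minikube -p functional-514284 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"  # exit 0: restored from the host-side cache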

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 kubectl -- --context functional-514284 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-514284 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (39.53s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-514284 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-514284 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.531224948s)
functional_test.go:757: restart took 39.53135192s for "functional-514284" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.53s)

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-514284 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
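
The check above reads the phase and Ready condition of each control-plane pod from the JSON output. A rough hand-run equivalent using standard jsonpath (the field paths are plain Kubernetes API, not taken from the test code):

    kubectl --context functional-514284 -n kube-system get po -l tier=control-plane \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'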

TestFunctional/serial/LogsCmd (1.52s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-514284 logs: (1.517894791s)
--- PASS: TestFunctional/serial/LogsCmd (1.52s)

TestFunctional/serial/LogsFileCmd (1.47s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 logs --file /tmp/TestFunctionalserialLogsFileCmd471636679/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-514284 logs --file /tmp/TestFunctionalserialLogsFileCmd471636679/001/logs.txt: (1.473974342s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.47s)

TestFunctional/serial/InvalidService (4.25s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-514284 apply -f testdata/invalidsvc.yaml
E1107 23:15:38.956288   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
E1107 23:15:38.962325   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
E1107 23:15:38.972584   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
E1107 23:15:38.992872   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
E1107 23:15:39.033154   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
E1107 23:15:39.113475   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
E1107 23:15:39.273930   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
E1107 23:15:39.594539   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
E1107 23:15:40.235475   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-514284
E1107 23:15:41.516145   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-514284: exit status 115 (294.354363ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.155:31042 |
	|-----------|-------------|-------------|-----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-514284 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.25s)
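
The testdata/invalidsvc.yaml manifest itself is not shown in this log; a hypothetical stand-in that trips the same SVC_UNREACHABLE path is any NodePort service whose selector matches no running pod:

    # kubectl create service gives the service a selector (app=invalid-svc) that no pod carries.
    kubectl --context functional-514284 create service nodeport invalid-svc --tcp=80:80
    minikube -p functional-514284 service invalid-svc   # exit 115: no running pod for service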

TestFunctional/parallel/ConfigCmd (0.38s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-514284 config get cpus: exit status 14 (60.177244ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-514284 config get cpus: exit status 14 (58.35429ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)
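
Exit status 14 is minikube's "key not found in config" code, so `config get` doubles as an existence probe. The round trip, with the profile name assumed from above:

    minikube -p functional-514284 config get cpus     # exit 14 while the key is unset
    minikube -p functional-514284 config set cpus 2
    minikube -p functional-514284 config get cpus     # prints 2, exit 0
    minikube -p functional-514284 config unset cpus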

TestFunctional/parallel/DashboardCmd (21.02s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-514284 --alsologtostderr -v=1]
E1107 23:16:19.919855   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-514284 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 24739: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (21.02s)

TestFunctional/parallel/DryRun (0.3s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-514284 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-514284 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (149.516179ms)
-- stdout --
	* [functional-514284] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
-- /stdout --
** stderr ** 
	I1107 23:16:19.080923   24647 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:16:19.081078   24647 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:16:19.081086   24647 out.go:309] Setting ErrFile to fd 2...
	I1107 23:16:19.081092   24647 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:16:19.081255   24647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
	I1107 23:16:19.081805   24647 out.go:303] Setting JSON to false
	I1107 23:16:19.082696   24647 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3528,"bootTime":1699395451,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 23:16:19.082759   24647 start.go:138] virtualization: kvm guest
	I1107 23:16:19.084708   24647 out.go:177] * [functional-514284] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 23:16:19.086475   24647 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 23:16:19.086482   24647 notify.go:220] Checking for updates...
	I1107 23:16:19.087841   24647 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:16:19.088975   24647 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1107 23:16:19.090280   24647 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	I1107 23:16:19.091633   24647 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 23:16:19.092994   24647 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 23:16:19.094757   24647 config.go:182] Loaded profile config "functional-514284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:16:19.095185   24647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:16:19.095248   24647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:16:19.109984   24647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44689
	I1107 23:16:19.110402   24647 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:16:19.110938   24647 main.go:141] libmachine: Using API Version  1
	I1107 23:16:19.110961   24647 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:16:19.111322   24647 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:16:19.111483   24647 main.go:141] libmachine: (functional-514284) Calling .DriverName
	I1107 23:16:19.111693   24647 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:16:19.112110   24647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:16:19.112153   24647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:16:19.126291   24647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45049
	I1107 23:16:19.126653   24647 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:16:19.127175   24647 main.go:141] libmachine: Using API Version  1
	I1107 23:16:19.127203   24647 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:16:19.127507   24647 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:16:19.127717   24647 main.go:141] libmachine: (functional-514284) Calling .DriverName
	I1107 23:16:19.159495   24647 out.go:177] * Using the kvm2 driver based on existing profile
	I1107 23:16:19.160977   24647 start.go:298] selected driver: kvm2
	I1107 23:16:19.160993   24647 start.go:902] validating driver "kvm2" against &{Name:functional-514284 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-514
284 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.155 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:16:19.161113   24647 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 23:16:19.163476   24647 out.go:177] 
	W1107 23:16:19.164991   24647 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1107 23:16:19.166341   24647 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-514284 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.30s)
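
--dry-run walks the full validation path without touching the VM, which is why the undersized request fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). The same check by hand:

    minikube start -p functional-514284 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
    echo $?   # 23: 250MiB is below the 1800MB usable minimum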

TestFunctional/parallel/InternationalLanguage (0.15s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-514284 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-514284 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (146.529578ms)
-- stdout --
	* [functional-514284] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
-- /stdout --
** stderr ** 
	I1107 23:16:17.262494   24468 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:16:17.262625   24468 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:16:17.262634   24468 out.go:309] Setting ErrFile to fd 2...
	I1107 23:16:17.262642   24468 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:16:17.262908   24468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
	I1107 23:16:17.263484   24468 out.go:303] Setting JSON to false
	I1107 23:16:17.264355   24468 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3526,"bootTime":1699395451,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 23:16:17.264419   24468 start.go:138] virtualization: kvm guest
	I1107 23:16:17.266667   24468 out.go:177] * [functional-514284] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I1107 23:16:17.268693   24468 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 23:16:17.268685   24468 notify.go:220] Checking for updates...
	I1107 23:16:17.270384   24468 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:16:17.271985   24468 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1107 23:16:17.273385   24468 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	I1107 23:16:17.274891   24468 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 23:16:17.276361   24468 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 23:16:17.278130   24468 config.go:182] Loaded profile config "functional-514284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:16:17.278556   24468 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:16:17.278597   24468 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:16:17.293207   24468 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42409
	I1107 23:16:17.293649   24468 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:16:17.294190   24468 main.go:141] libmachine: Using API Version  1
	I1107 23:16:17.294217   24468 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:16:17.294558   24468 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:16:17.294737   24468 main.go:141] libmachine: (functional-514284) Calling .DriverName
	I1107 23:16:17.294972   24468 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:16:17.295289   24468 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:16:17.295338   24468 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:16:17.310953   24468 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44235
	I1107 23:16:17.311335   24468 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:16:17.311854   24468 main.go:141] libmachine: Using API Version  1
	I1107 23:16:17.311873   24468 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:16:17.312168   24468 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:16:17.312330   24468 main.go:141] libmachine: (functional-514284) Calling .DriverName
	I1107 23:16:17.345492   24468 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1107 23:16:17.347027   24468 start.go:298] selected driver: kvm2
	I1107 23:16:17.347046   24468 start.go:902] validating driver "kvm2" against &{Name:functional-514284 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-514
284 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.155 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:16:17.347164   24468 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 23:16:17.349353   24468 out.go:177] 
	W1107 23:16:17.350628   24468 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1107 23:16:17.351834   24468 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

TestFunctional/parallel/StatusCmd (1.19s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.19s)
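
-f takes a Go template over the status struct, so arbitrary one-line formats can be assembled (the "kublet" in the logged command is just literal label text; the template field is .Kubelet):

    minikube -p functional-514284 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    minikube -p functional-514284 status -o json   # same data, machine-readable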

TestFunctional/parallel/ServiceCmdConnect (14.55s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-514284 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-514284 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-8w2w8" [d5b52c36-07c1-494e-a921-e388c5f94435] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-8w2w8" [d5b52c36-07c1-494e-a921-e388c5f94435] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 14.033263101s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.155:32164
functional_test.go:1674: http://192.168.50.155:32164: success! body:

Hostname: hello-node-connect-55497b8b78-8w2w8

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.155:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.155:32164
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (14.55s)
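
End to end this is the standard deploy/expose/curl loop; a hand-run sketch with the image and port taken from the log:

    kubectl --context functional-514284 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-514284 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(minikube -p functional-514284 service hello-node-connect --url)
    curl -s "$URL"   # echoserver reflects the hostname, headers, and request line back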

TestFunctional/parallel/AddonsCmd (0.14s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (61.38s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [87c4859b-19ba-4c93-9ad0-7ede3c36199f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.035107405s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-514284 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-514284 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-514284 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-514284 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [261451fa-92bd-4c64-9a54-3697b0a80cd2] Pending
helpers_test.go:344: "sp-pod" [261451fa-92bd-4c64-9a54-3697b0a80cd2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1107 23:15:49.198022   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [261451fa-92bd-4c64-9a54-3697b0a80cd2] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 35.02654966s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-514284 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-514284 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-514284 delete -f testdata/storage-provisioner/pod.yaml: (1.348042199s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-514284 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [881a5a4c-6102-4069-a2ee-4996d28e0925] Pending
helpers_test.go:344: "sp-pod" [881a5a4c-6102-4069-a2ee-4996d28e0925] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [881a5a4c-6102-4069-a2ee-4996d28e0925] Running
2023/11/07 23:16:39 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 19.025095967s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-514284 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (61.38s)
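
The two pod generations are what prove persistence: the first pod writes /tmp/mount/foo, is deleted, and a second pod backed by the same claim still sees the file. A sketch with hypothetical manifest names standing in for the test's testdata files:

    kubectl apply -f pvc.yaml        # hypothetical: PersistentVolumeClaim "myclaim"
    kubectl apply -f pod.yaml        # hypothetical: pod "sp-pod" mounting myclaim at /tmp/mount
    kubectl exec sp-pod -- touch /tmp/mount/foo
    kubectl delete -f pod.yaml
    kubectl apply -f pod.yaml        # second pod generation, same claim
    kubectl exec sp-pod -- ls /tmp/mount   # foo survives the pod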

TestFunctional/parallel/SSHCmd (0.48s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.48s)

TestFunctional/parallel/CpCmd (0.93s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh -n functional-514284 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 cp functional-514284:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd511670901/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh -n functional-514284 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.93s)

TestFunctional/parallel/MySQL (29.9s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-514284 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-2mplb" [c22a3b5f-c6fb-46ad-ae8b-fd378cfb6860] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-2mplb" [c22a3b5f-c6fb-46ad-ae8b-fd378cfb6860] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.025852814s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-514284 exec mysql-859648c796-2mplb -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-514284 exec mysql-859648c796-2mplb -- mysql -ppassword -e "show databases;": exit status 1 (316.191828ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-514284 exec mysql-859648c796-2mplb -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-514284 exec mysql-859648c796-2mplb -- mysql -ppassword -e "show databases;": exit status 1 (572.787ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-514284 exec mysql-859648c796-2mplb -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.90s)
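
The two failures above (access denied, then no socket) are ordinary mysqld warm-up noise; the test simply retries until a query succeeds. A polling sketch, with the deployment name inferred from the pod name above:

    until kubectl --context functional-514284 exec deploy/mysql -- \
        mysql -ppassword -e 'show databases;' >/dev/null 2>&1; do
      sleep 2   # ERROR 1045/2002 while mysqld initializes; keep retrying
    done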

TestFunctional/parallel/FileSync (0.24s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/16848/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh "sudo cat /etc/test/nested/copy/16848/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)
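
File sync mirrors anything placed under the host's ~/.minikube/files tree into the node at the same relative path, which is where the /etc/test/nested/copy/16848/hosts file above came from. A sketch with a hypothetical example path:

    mkdir -p ~/.minikube/files/etc/demo
    echo 'hello from the host' > ~/.minikube/files/etc/demo/greeting
    minikube start -p functional-514284                      # sync happens during start
    minikube -p functional-514284 ssh "cat /etc/demo/greeting"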

TestFunctional/parallel/CertSync (1.41s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/16848.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh "sudo cat /etc/ssl/certs/16848.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/16848.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh "sudo cat /usr/share/ca-certificates/16848.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/168482.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh "sudo cat /etc/ssl/certs/168482.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/168482.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh "sudo cat /usr/share/ca-certificates/168482.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.41s)
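
The 51391683.0 and 3ec20f2e.0 names follow the OpenSSL c_rehash convention: the certificate's subject hash plus a collision counter, so tools that scan /etc/ssl/certs by hash can find the synced certs. Assuming a local copy of the cert file, the link can be checked with:

    openssl x509 -noout -hash -in 16848.pem   # expected to print 51391683 for the cert above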

TestFunctional/parallel/NodeLabels (0.07s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-514284 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
E1107 23:15:44.077179   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-514284 ssh "sudo systemctl is-active docker": exit status 1 (248.928836ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-514284 ssh "sudo systemctl is-active containerd": exit status 1 (215.657298ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
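
`systemctl is-active` prints the unit state and exits non-zero for anything but active (3 here, surfaced as minikube exit status 1), so the probe works both interactively and in scripts:

    minikube -p functional-514284 ssh "sudo systemctl is-active docker"       # prints "inactive"
    minikube -p functional-514284 ssh "sudo systemctl is-active containerd"; echo $?   # 1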

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)
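Note: all three UpdateContextCmd cases exercise `minikube update-context`, which re-points the profile's kubeconfig entry at the VM's current IP if it has changed. One way to inspect the rewritten server address (a sketch, assuming the default kubeconfig and that the cluster entry is named after the profile):
	$ kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-514284")].cluster.server}'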
TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.51s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.51s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-514284 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-514284
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-514284
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-514284 image ls --format short --alsologtostderr:
I1107 23:16:28.714083   24983 out.go:296] Setting OutFile to fd 1 ...
I1107 23:16:28.714183   24983 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:16:28.714191   24983 out.go:309] Setting ErrFile to fd 2...
I1107 23:16:28.714195   24983 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:16:28.714374   24983 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
I1107 23:16:28.714906   24983 config.go:182] Loaded profile config "functional-514284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1107 23:16:28.714998   24983 config.go:182] Loaded profile config "functional-514284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1107 23:16:28.715474   24983 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1107 23:16:28.715534   24983 main.go:141] libmachine: Launching plugin server for driver kvm2
I1107 23:16:28.729174   24983 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43695
I1107 23:16:28.729561   24983 main.go:141] libmachine: () Calling .GetVersion
I1107 23:16:28.730137   24983 main.go:141] libmachine: Using API Version  1
I1107 23:16:28.730166   24983 main.go:141] libmachine: () Calling .SetConfigRaw
I1107 23:16:28.730455   24983 main.go:141] libmachine: () Calling .GetMachineName
I1107 23:16:28.730625   24983 main.go:141] libmachine: (functional-514284) Calling .GetState
I1107 23:16:28.732329   24983 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1107 23:16:28.732365   24983 main.go:141] libmachine: Launching plugin server for driver kvm2
I1107 23:16:28.745866   24983 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
I1107 23:16:28.746251   24983 main.go:141] libmachine: () Calling .GetVersion
I1107 23:16:28.746666   24983 main.go:141] libmachine: Using API Version  1
I1107 23:16:28.746686   24983 main.go:141] libmachine: () Calling .SetConfigRaw
I1107 23:16:28.746962   24983 main.go:141] libmachine: () Calling .GetMachineName
I1107 23:16:28.747132   24983 main.go:141] libmachine: (functional-514284) Calling .DriverName
I1107 23:16:28.747300   24983 ssh_runner.go:195] Run: systemctl --version
I1107 23:16:28.747324   24983 main.go:141] libmachine: (functional-514284) Calling .GetSSHHostname
I1107 23:16:28.749733   24983 main.go:141] libmachine: (functional-514284) DBG | domain functional-514284 has defined MAC address 52:54:00:ac:cb:80 in network mk-functional-514284
I1107 23:16:28.750111   24983 main.go:141] libmachine: (functional-514284) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:cb:80", ip: ""} in network mk-functional-514284: {Iface:virbr1 ExpiryTime:2023-11-08 00:13:05 +0000 UTC Type:0 Mac:52:54:00:ac:cb:80 Iaid: IPaddr:192.168.50.155 Prefix:24 Hostname:functional-514284 Clientid:01:52:54:00:ac:cb:80}
I1107 23:16:28.750140   24983 main.go:141] libmachine: (functional-514284) DBG | domain functional-514284 has defined IP address 192.168.50.155 and MAC address 52:54:00:ac:cb:80 in network mk-functional-514284
I1107 23:16:28.750259   24983 main.go:141] libmachine: (functional-514284) Calling .GetSSHPort
I1107 23:16:28.750396   24983 main.go:141] libmachine: (functional-514284) Calling .GetSSHKeyPath
I1107 23:16:28.750547   24983 main.go:141] libmachine: (functional-514284) Calling .GetSSHUsername
I1107 23:16:28.750688   24983 sshutil.go:53] new ssh client: &{IP:192.168.50.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/functional-514284/id_rsa Username:docker}
I1107 23:16:28.835024   24983 ssh_runner.go:195] Run: sudo crictl images --output json
I1107 23:16:28.881294   24983 main.go:141] libmachine: Making call to close driver server
I1107 23:16:28.881306   24983 main.go:141] libmachine: (functional-514284) Calling .Close
I1107 23:16:28.881566   24983 main.go:141] libmachine: (functional-514284) DBG | Closing plugin on server side
I1107 23:16:28.881573   24983 main.go:141] libmachine: Successfully made call to close driver server
I1107 23:16:28.881603   24983 main.go:141] libmachine: Making call to close connection to plugin binary
I1107 23:16:28.881611   24983 main.go:141] libmachine: Making call to close driver server
I1107 23:16:28.881620   24983 main.go:141] libmachine: (functional-514284) Calling .Close
I1107 23:16:28.881820   24983 main.go:141] libmachine: Successfully made call to close driver server
I1107 23:16:28.881839   24983 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-514284 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-scheduler          | v1.28.3            | 6d1b4fd1b182d | 61.5MB |
| docker.io/library/mysql                 | 5.7                | 547b3c3c15a96 | 520MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-514284  | ea8bd19e3218a | 3.34kB |
| registry.k8s.io/kube-proxy              | v1.28.3            | bfc896cf80fba | 74.7MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| docker.io/library/nginx                 | latest             | c20060033e06f | 191MB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.28.3            | 5374347291230 | 127MB  |
| gcr.io/google-containers/addon-resizer  | functional-514284  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/kube-controller-manager | v1.28.3            | 10baa1ca17068 | 123MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-514284 image ls --format table --alsologtostderr:
I1107 23:16:31.724590   25244 out.go:296] Setting OutFile to fd 1 ...
I1107 23:16:31.724921   25244 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:16:31.724941   25244 out.go:309] Setting ErrFile to fd 2...
I1107 23:16:31.724949   25244 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:16:31.725229   25244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
I1107 23:16:31.726009   25244 config.go:182] Loaded profile config "functional-514284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1107 23:16:31.726159   25244 config.go:182] Loaded profile config "functional-514284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1107 23:16:31.726718   25244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1107 23:16:31.726779   25244 main.go:141] libmachine: Launching plugin server for driver kvm2
I1107 23:16:31.740460   25244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40385
I1107 23:16:31.740868   25244 main.go:141] libmachine: () Calling .GetVersion
I1107 23:16:31.741398   25244 main.go:141] libmachine: Using API Version  1
I1107 23:16:31.741425   25244 main.go:141] libmachine: () Calling .SetConfigRaw
I1107 23:16:31.741774   25244 main.go:141] libmachine: () Calling .GetMachineName
I1107 23:16:31.741978   25244 main.go:141] libmachine: (functional-514284) Calling .GetState
I1107 23:16:31.743913   25244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1107 23:16:31.743959   25244 main.go:141] libmachine: Launching plugin server for driver kvm2
I1107 23:16:31.757673   25244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45341
I1107 23:16:31.758044   25244 main.go:141] libmachine: () Calling .GetVersion
I1107 23:16:31.758505   25244 main.go:141] libmachine: Using API Version  1
I1107 23:16:31.758543   25244 main.go:141] libmachine: () Calling .SetConfigRaw
I1107 23:16:31.758919   25244 main.go:141] libmachine: () Calling .GetMachineName
I1107 23:16:31.759077   25244 main.go:141] libmachine: (functional-514284) Calling .DriverName
I1107 23:16:31.759231   25244 ssh_runner.go:195] Run: systemctl --version
I1107 23:16:31.759268   25244 main.go:141] libmachine: (functional-514284) Calling .GetSSHHostname
I1107 23:16:31.762127   25244 main.go:141] libmachine: (functional-514284) DBG | domain functional-514284 has defined MAC address 52:54:00:ac:cb:80 in network mk-functional-514284
I1107 23:16:31.762592   25244 main.go:141] libmachine: (functional-514284) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:cb:80", ip: ""} in network mk-functional-514284: {Iface:virbr1 ExpiryTime:2023-11-08 00:13:05 +0000 UTC Type:0 Mac:52:54:00:ac:cb:80 Iaid: IPaddr:192.168.50.155 Prefix:24 Hostname:functional-514284 Clientid:01:52:54:00:ac:cb:80}
I1107 23:16:31.762621   25244 main.go:141] libmachine: (functional-514284) DBG | domain functional-514284 has defined IP address 192.168.50.155 and MAC address 52:54:00:ac:cb:80 in network mk-functional-514284
I1107 23:16:31.762767   25244 main.go:141] libmachine: (functional-514284) Calling .GetSSHPort
I1107 23:16:31.762924   25244 main.go:141] libmachine: (functional-514284) Calling .GetSSHKeyPath
I1107 23:16:31.763105   25244 main.go:141] libmachine: (functional-514284) Calling .GetSSHUsername
I1107 23:16:31.763279   25244 sshutil.go:53] new ssh client: &{IP:192.168.50.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/functional-514284/id_rsa Username:docker}
I1107 23:16:31.901681   25244 ssh_runner.go:195] Run: sudo crictl images --output json
I1107 23:16:32.003487   25244 main.go:141] libmachine: Making call to close driver server
I1107 23:16:32.003504   25244 main.go:141] libmachine: (functional-514284) Calling .Close
I1107 23:16:32.003858   25244 main.go:141] libmachine: (functional-514284) DBG | Closing plugin on server side
I1107 23:16:32.003893   25244 main.go:141] libmachine: Successfully made call to close driver server
I1107 23:16:32.003902   25244 main.go:141] libmachine: Making call to close connection to plugin binary
I1107 23:16:32.003917   25244 main.go:141] libmachine: Making call to close driver server
I1107 23:16:32.003926   25244 main.go:141] libmachine: (functional-514284) Calling .Close
I1107 23:16:32.004160   25244 main.go:141] libmachine: Successfully made call to close driver server
I1107 23:16:32.004176   25244 main.go:141] libmachine: Making call to close connection to plugin binary
I1107 23:16:32.004202   25244 main.go:141] libmachine: (functional-514284) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-514284 image ls --format json --alsologtostderr:
[{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"547b3c3c15a9698ee368530b251e6baa66807c64742355e6724ba59b4d3ec8a6","repoDigests":["docker.io/library/mysql@sha256:444e015ba2ad9fc0884a82cef6c3b15f89db003aef11b55e4daca24f55538cb9","docker.io/library/mysql@sha256:880063e8acda81825f0b946eff47c45235840480da03e71a22113ebafe166a3d"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519576537"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b3
6e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707","registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.3"],"size":"123188534"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529b
f982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","repoDigests":["registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab","registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"size":"127165392"},{"id":"6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725","registry.k8s.io/k
ube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"61498678"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647","repoDigests":["docker.io/library/nginx@sha256:86e53c4c16a6a276b204b0fd3a8143d86547c967dc8258b3d47c3a21bb68d3c6","docker.io/library/nginx@sha256:d2e65182b5fd330470eca9b8e23e8a1a0d87cc9b820eb1fb3f034bf8248d37ee"],"repoTags":["docker.io/library/nginx:latest"],"size":"190960382"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"ffd4cfbbe753e624
19e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-514284"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"ea8bd19e3218a15da09
494aed2d9b27d140cd1e1c0e3f7666714c488a198fd15","repoDigests":["localhost/minikube-local-cache-test@sha256:e3763614abb83b54a4b36c529cf6c4e2f29cd79209fbdb0856924602c1755301"],"repoTags":["localhost/minikube-local-cache-test:functional-514284"],"size":"3343"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","repoDigests":["registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e47650
64ca3c45003de97eb8","registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.3"],"size":"74691991"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-514284 image ls --format json --alsologtostderr:
I1107 23:16:31.364683   25221 out.go:296] Setting OutFile to fd 1 ...
I1107 23:16:31.364853   25221 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:16:31.364867   25221 out.go:309] Setting ErrFile to fd 2...
I1107 23:16:31.364874   25221 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:16:31.365080   25221 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
I1107 23:16:31.365678   25221 config.go:182] Loaded profile config "functional-514284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1107 23:16:31.365789   25221 config.go:182] Loaded profile config "functional-514284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1107 23:16:31.366163   25221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1107 23:16:31.366216   25221 main.go:141] libmachine: Launching plugin server for driver kvm2
I1107 23:16:31.380252   25221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32795
I1107 23:16:31.380680   25221 main.go:141] libmachine: () Calling .GetVersion
I1107 23:16:31.381287   25221 main.go:141] libmachine: Using API Version  1
I1107 23:16:31.381316   25221 main.go:141] libmachine: () Calling .SetConfigRaw
I1107 23:16:31.381709   25221 main.go:141] libmachine: () Calling .GetMachineName
I1107 23:16:31.381892   25221 main.go:141] libmachine: (functional-514284) Calling .GetState
I1107 23:16:31.383592   25221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1107 23:16:31.383630   25221 main.go:141] libmachine: Launching plugin server for driver kvm2
I1107 23:16:31.397583   25221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42827
I1107 23:16:31.397954   25221 main.go:141] libmachine: () Calling .GetVersion
I1107 23:16:31.398349   25221 main.go:141] libmachine: Using API Version  1
I1107 23:16:31.398367   25221 main.go:141] libmachine: () Calling .SetConfigRaw
I1107 23:16:31.398712   25221 main.go:141] libmachine: () Calling .GetMachineName
I1107 23:16:31.398865   25221 main.go:141] libmachine: (functional-514284) Calling .DriverName
I1107 23:16:31.399027   25221 ssh_runner.go:195] Run: systemctl --version
I1107 23:16:31.399052   25221 main.go:141] libmachine: (functional-514284) Calling .GetSSHHostname
I1107 23:16:31.401747   25221 main.go:141] libmachine: (functional-514284) DBG | domain functional-514284 has defined MAC address 52:54:00:ac:cb:80 in network mk-functional-514284
I1107 23:16:31.402141   25221 main.go:141] libmachine: (functional-514284) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:cb:80", ip: ""} in network mk-functional-514284: {Iface:virbr1 ExpiryTime:2023-11-08 00:13:05 +0000 UTC Type:0 Mac:52:54:00:ac:cb:80 Iaid: IPaddr:192.168.50.155 Prefix:24 Hostname:functional-514284 Clientid:01:52:54:00:ac:cb:80}
I1107 23:16:31.402178   25221 main.go:141] libmachine: (functional-514284) DBG | domain functional-514284 has defined IP address 192.168.50.155 and MAC address 52:54:00:ac:cb:80 in network mk-functional-514284
I1107 23:16:31.402311   25221 main.go:141] libmachine: (functional-514284) Calling .GetSSHPort
I1107 23:16:31.402499   25221 main.go:141] libmachine: (functional-514284) Calling .GetSSHKeyPath
I1107 23:16:31.402658   25221 main.go:141] libmachine: (functional-514284) Calling .GetSSHUsername
I1107 23:16:31.402823   25221 sshutil.go:53] new ssh client: &{IP:192.168.50.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/functional-514284/id_rsa Username:docker}
I1107 23:16:31.522471   25221 ssh_runner.go:195] Run: sudo crictl images --output json
I1107 23:16:31.662508   25221 main.go:141] libmachine: Making call to close driver server
I1107 23:16:31.662525   25221 main.go:141] libmachine: (functional-514284) Calling .Close
I1107 23:16:31.662824   25221 main.go:141] libmachine: Successfully made call to close driver server
I1107 23:16:31.662843   25221 main.go:141] libmachine: Making call to close connection to plugin binary
I1107 23:16:31.662849   25221 main.go:141] libmachine: (functional-514284) DBG | Closing plugin on server side
I1107 23:16:31.662860   25221 main.go:141] libmachine: Making call to close driver server
I1107 23:16:31.662876   25221 main.go:141] libmachine: (functional-514284) Calling .Close
I1107 23:16:31.663107   25221 main.go:141] libmachine: Successfully made call to close driver server
I1107 23:16:31.663123   25221 main.go:141] libmachine: Making call to close connection to plugin binary
I1107 23:16:31.663155   25221 main.go:141] libmachine: (functional-514284) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)
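Note: as the stderr traces above show, every `image ls` variant is backed by the same `sudo crictl images --output json` call inside the guest; the short/table/json/yaml formats only change how that output is rendered. The raw data can also be sliced directly with jq (a minimal sketch, assuming jq is available on the host):
	$ out/minikube-linux-amd64 -p functional-514284 ssh "sudo crictl images --output json" | jq -r '.images[] | .repoTags[]?'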
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-514284 image ls --format yaml --alsologtostderr:
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707
- registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "123188534"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: ea8bd19e3218a15da09494aed2d9b27d140cd1e1c0e3f7666714c488a198fd15
repoDigests:
- localhost/minikube-local-cache-test@sha256:e3763614abb83b54a4b36c529cf6c4e2f29cd79209fbdb0856924602c1755301
repoTags:
- localhost/minikube-local-cache-test:functional-514284
size: "3343"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647
repoDigests:
- docker.io/library/nginx@sha256:86e53c4c16a6a276b204b0fd3a8143d86547c967dc8258b3d47c3a21bb68d3c6
- docker.io/library/nginx@sha256:d2e65182b5fd330470eca9b8e23e8a1a0d87cc9b820eb1fb3f034bf8248d37ee
repoTags:
- docker.io/library/nginx:latest
size: "190960382"
- id: bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf
repoDigests:
- registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8
- registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "74691991"
- id: 6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725
- registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "61498678"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 547b3c3c15a9698ee368530b251e6baa66807c64742355e6724ba59b4d3ec8a6
repoDigests:
- docker.io/library/mysql@sha256:444e015ba2ad9fc0884a82cef6c3b15f89db003aef11b55e4daca24f55538cb9
- docker.io/library/mysql@sha256:880063e8acda81825f0b946eff47c45235840480da03e71a22113ebafe166a3d
repoTags:
- docker.io/library/mysql:5.7
size: "519576537"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-514284
size: "34114467"
- id: 53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab
- registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "127165392"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-514284 image ls --format yaml --alsologtostderr:
I1107 23:16:28.945027   25007 out.go:296] Setting OutFile to fd 1 ...
I1107 23:16:28.945280   25007 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:16:28.945290   25007 out.go:309] Setting ErrFile to fd 2...
I1107 23:16:28.945294   25007 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:16:28.945543   25007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
I1107 23:16:28.946135   25007 config.go:182] Loaded profile config "functional-514284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1107 23:16:28.946245   25007 config.go:182] Loaded profile config "functional-514284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1107 23:16:28.946633   25007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1107 23:16:28.946685   25007 main.go:141] libmachine: Launching plugin server for driver kvm2
I1107 23:16:28.960306   25007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34669
I1107 23:16:28.960705   25007 main.go:141] libmachine: () Calling .GetVersion
I1107 23:16:28.961255   25007 main.go:141] libmachine: Using API Version  1
I1107 23:16:28.961281   25007 main.go:141] libmachine: () Calling .SetConfigRaw
I1107 23:16:28.961570   25007 main.go:141] libmachine: () Calling .GetMachineName
I1107 23:16:28.961745   25007 main.go:141] libmachine: (functional-514284) Calling .GetState
I1107 23:16:28.963596   25007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1107 23:16:28.963632   25007 main.go:141] libmachine: Launching plugin server for driver kvm2
I1107 23:16:28.977631   25007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36107
I1107 23:16:28.978021   25007 main.go:141] libmachine: () Calling .GetVersion
I1107 23:16:28.978422   25007 main.go:141] libmachine: Using API Version  1
I1107 23:16:28.978443   25007 main.go:141] libmachine: () Calling .SetConfigRaw
I1107 23:16:28.978741   25007 main.go:141] libmachine: () Calling .GetMachineName
I1107 23:16:28.978949   25007 main.go:141] libmachine: (functional-514284) Calling .DriverName
I1107 23:16:28.979121   25007 ssh_runner.go:195] Run: systemctl --version
I1107 23:16:28.979144   25007 main.go:141] libmachine: (functional-514284) Calling .GetSSHHostname
I1107 23:16:28.981590   25007 main.go:141] libmachine: (functional-514284) DBG | domain functional-514284 has defined MAC address 52:54:00:ac:cb:80 in network mk-functional-514284
I1107 23:16:28.981925   25007 main.go:141] libmachine: (functional-514284) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:cb:80", ip: ""} in network mk-functional-514284: {Iface:virbr1 ExpiryTime:2023-11-08 00:13:05 +0000 UTC Type:0 Mac:52:54:00:ac:cb:80 Iaid: IPaddr:192.168.50.155 Prefix:24 Hostname:functional-514284 Clientid:01:52:54:00:ac:cb:80}
I1107 23:16:28.981957   25007 main.go:141] libmachine: (functional-514284) DBG | domain functional-514284 has defined IP address 192.168.50.155 and MAC address 52:54:00:ac:cb:80 in network mk-functional-514284
I1107 23:16:28.982084   25007 main.go:141] libmachine: (functional-514284) Calling .GetSSHPort
I1107 23:16:28.982256   25007 main.go:141] libmachine: (functional-514284) Calling .GetSSHKeyPath
I1107 23:16:28.982420   25007 main.go:141] libmachine: (functional-514284) Calling .GetSSHUsername
I1107 23:16:28.982529   25007 sshutil.go:53] new ssh client: &{IP:192.168.50.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/functional-514284/id_rsa Username:docker}
I1107 23:16:29.071878   25007 ssh_runner.go:195] Run: sudo crictl images --output json
I1107 23:16:29.115578   25007 main.go:141] libmachine: Making call to close driver server
I1107 23:16:29.115596   25007 main.go:141] libmachine: (functional-514284) Calling .Close
I1107 23:16:29.115857   25007 main.go:141] libmachine: Successfully made call to close driver server
I1107 23:16:29.115875   25007 main.go:141] libmachine: Making call to close connection to plugin binary
I1107 23:16:29.115888   25007 main.go:141] libmachine: Making call to close driver server
I1107 23:16:29.115879   25007 main.go:141] libmachine: (functional-514284) DBG | Closing plugin on server side
I1107 23:16:29.115899   25007 main.go:141] libmachine: (functional-514284) Calling .Close
I1107 23:16:29.116099   25007 main.go:141] libmachine: (functional-514284) DBG | Closing plugin on server side
I1107 23:16:29.116115   25007 main.go:141] libmachine: Successfully made call to close driver server
I1107 23:16:29.116131   25007 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (6.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-514284 ssh pgrep buildkitd: exit status 1 (216.702441ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 image build -t localhost/my-image:functional-514284 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-514284 image build -t localhost/my-image:functional-514284 testdata/build --alsologtostderr: (6.443409621s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-514284 image build -t localhost/my-image:functional-514284 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 271e4d3287d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-514284
--> 3bb6ba20eb5
Successfully tagged localhost/my-image:functional-514284
3bb6ba20eb579754112f464548af098d3acde654f6670f42d4a226f51e4c7a03
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-514284 image build -t localhost/my-image:functional-514284 testdata/build --alsologtostderr:
I1107 23:16:29.392439   25090 out.go:296] Setting OutFile to fd 1 ...
I1107 23:16:29.392606   25090 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:16:29.392617   25090 out.go:309] Setting ErrFile to fd 2...
I1107 23:16:29.392622   25090 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:16:29.392787   25090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
I1107 23:16:29.393400   25090 config.go:182] Loaded profile config "functional-514284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1107 23:16:29.393890   25090 config.go:182] Loaded profile config "functional-514284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1107 23:16:29.394255   25090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1107 23:16:29.394328   25090 main.go:141] libmachine: Launching plugin server for driver kvm2
I1107 23:16:29.408390   25090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46829
I1107 23:16:29.408784   25090 main.go:141] libmachine: () Calling .GetVersion
I1107 23:16:29.409287   25090 main.go:141] libmachine: Using API Version  1
I1107 23:16:29.409304   25090 main.go:141] libmachine: () Calling .SetConfigRaw
I1107 23:16:29.409636   25090 main.go:141] libmachine: () Calling .GetMachineName
I1107 23:16:29.409793   25090 main.go:141] libmachine: (functional-514284) Calling .GetState
I1107 23:16:29.411523   25090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1107 23:16:29.411561   25090 main.go:141] libmachine: Launching plugin server for driver kvm2
I1107 23:16:29.424967   25090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35249
I1107 23:16:29.425331   25090 main.go:141] libmachine: () Calling .GetVersion
I1107 23:16:29.425761   25090 main.go:141] libmachine: Using API Version  1
I1107 23:16:29.425785   25090 main.go:141] libmachine: () Calling .SetConfigRaw
I1107 23:16:29.426064   25090 main.go:141] libmachine: () Calling .GetMachineName
I1107 23:16:29.426232   25090 main.go:141] libmachine: (functional-514284) Calling .DriverName
I1107 23:16:29.426420   25090 ssh_runner.go:195] Run: systemctl --version
I1107 23:16:29.426442   25090 main.go:141] libmachine: (functional-514284) Calling .GetSSHHostname
I1107 23:16:29.428945   25090 main.go:141] libmachine: (functional-514284) DBG | domain functional-514284 has defined MAC address 52:54:00:ac:cb:80 in network mk-functional-514284
I1107 23:16:29.429354   25090 main.go:141] libmachine: (functional-514284) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:cb:80", ip: ""} in network mk-functional-514284: {Iface:virbr1 ExpiryTime:2023-11-08 00:13:05 +0000 UTC Type:0 Mac:52:54:00:ac:cb:80 Iaid: IPaddr:192.168.50.155 Prefix:24 Hostname:functional-514284 Clientid:01:52:54:00:ac:cb:80}
I1107 23:16:29.429384   25090 main.go:141] libmachine: (functional-514284) DBG | domain functional-514284 has defined IP address 192.168.50.155 and MAC address 52:54:00:ac:cb:80 in network mk-functional-514284
I1107 23:16:29.429513   25090 main.go:141] libmachine: (functional-514284) Calling .GetSSHPort
I1107 23:16:29.429667   25090 main.go:141] libmachine: (functional-514284) Calling .GetSSHKeyPath
I1107 23:16:29.429806   25090 main.go:141] libmachine: (functional-514284) Calling .GetSSHUsername
I1107 23:16:29.429940   25090 sshutil.go:53] new ssh client: &{IP:192.168.50.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/functional-514284/id_rsa Username:docker}
I1107 23:16:29.515233   25090 build_images.go:151] Building image from path: /tmp/build.3339793807.tar
I1107 23:16:29.515293   25090 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1107 23:16:29.524764   25090 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3339793807.tar
I1107 23:16:29.529266   25090 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3339793807.tar: stat -c "%s %y" /var/lib/minikube/build/build.3339793807.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3339793807.tar': No such file or directory
I1107 23:16:29.529295   25090 ssh_runner.go:362] scp /tmp/build.3339793807.tar --> /var/lib/minikube/build/build.3339793807.tar (3072 bytes)
I1107 23:16:29.558744   25090 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3339793807
I1107 23:16:29.568289   25090 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3339793807 -xf /var/lib/minikube/build/build.3339793807.tar
I1107 23:16:29.577999   25090 crio.go:297] Building image: /var/lib/minikube/build/build.3339793807
I1107 23:16:29.578064   25090 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-514284 /var/lib/minikube/build/build.3339793807 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1107 23:16:35.756491   25090 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-514284 /var/lib/minikube/build/build.3339793807 --cgroup-manager=cgroupfs: (6.178402516s)
I1107 23:16:35.756554   25090 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3339793807
I1107 23:16:35.766650   25090 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3339793807.tar
I1107 23:16:35.777302   25090 build_images.go:207] Built localhost/my-image:functional-514284 from /tmp/build.3339793807.tar
I1107 23:16:35.777331   25090 build_images.go:123] succeeded building to: functional-514284
I1107 23:16:35.777336   25090 build_images.go:124] failed building to: 
I1107 23:16:35.777355   25090 main.go:141] libmachine: Making call to close driver server
I1107 23:16:35.777376   25090 main.go:141] libmachine: (functional-514284) Calling .Close
I1107 23:16:35.777667   25090 main.go:141] libmachine: Successfully made call to close driver server
I1107 23:16:35.777684   25090 main.go:141] libmachine: Making call to close connection to plugin binary
I1107 23:16:35.777694   25090 main.go:141] libmachine: Making call to close driver server
I1107 23:16:35.777704   25090 main.go:141] libmachine: (functional-514284) Calling .Close
I1107 23:16:35.777898   25090 main.go:141] libmachine: Successfully made call to close driver server
I1107 23:16:35.777915   25090 main.go:141] libmachine: Making call to close connection to plugin binary
I1107 23:16:35.777916   25090 main.go:141] libmachine: (functional-514284) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.90s)
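Note: the three STEP lines in the build stdout imply that testdata/build holds a three-instruction Containerfile equivalent to the sketch below; minikube tars the build context, copies it to /var/lib/minikube/build, and on cri-o clusters drives `sudo podman build` against it, exactly as the ssh_runner lines trace.
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /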
TestFunctional/parallel/ImageCommands/Setup (2.36s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.340926786s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-514284
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.36s)

TestFunctional/parallel/MountCmd/any-port (25.73s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-514284 /tmp/TestFunctionalparallelMountCmdany-port2315701229/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1699398944085597364" to /tmp/TestFunctionalparallelMountCmdany-port2315701229/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1699398944085597364" to /tmp/TestFunctionalparallelMountCmdany-port2315701229/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1699398944085597364" to /tmp/TestFunctionalparallelMountCmdany-port2315701229/001/test-1699398944085597364
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-514284 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (220.203937ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  7 23:15 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  7 23:15 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  7 23:15 test-1699398944085597364
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh cat /mount-9p/test-1699398944085597364
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-514284 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [aa8d3aee-61d0-4e76-b5e4-3d29067f2db2] Pending
helpers_test.go:344: "busybox-mount" [aa8d3aee-61d0-4e76-b5e4-3d29067f2db2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [aa8d3aee-61d0-4e76-b5e4-3d29067f2db2] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [aa8d3aee-61d0-4e76-b5e4-3d29067f2db2] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 23.022330728s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-514284 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-514284 /tmp/TestFunctionalparallelMountCmdany-port2315701229/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (25.73s)
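Note: this is the standard 9p round trip: the host-side `minikube mount` daemon exports the temp directory, `findmnt -T /mount-9p` confirms the guest mount (the first findmnt races the mount setup and is retried), the busybox pod then presumably removes created-by-test-removed-by-pod and writes created-by-pod, and `stat` verifies both directions before the mount is torn down with `umount -f`. A manual equivalent (a sketch; /tmp/somedir is a placeholder path):
	$ out/minikube-linux-amd64 mount -p functional-514284 /tmp/somedir:/mount-9p &
	$ out/minikube-linux-amd64 -p functional-514284 ssh "findmnt -T /mount-9p | grep 9p"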
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 image load --daemon gcr.io/google-containers/addon-resizer:functional-514284 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-514284 image load --daemon gcr.io/google-containers/addon-resizer:functional-514284 --alsologtostderr: (2.437717084s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.70s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.676582037s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-514284
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 image load --daemon gcr.io/google-containers/addon-resizer:functional-514284 --alsologtostderr
E1107 23:15:59.439057   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-514284 image load --daemon gcr.io/google-containers/addon-resizer:functional-514284 --alsologtostderr: (6.672057366s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.60s)
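The tag-and-load sequence above is the standard way to move a locally tagged Docker image into the cluster's CRI-O runtime. A minimal sketch of the same steps (image and profile names as in the log):

    docker pull gcr.io/google-containers/addon-resizer:1.8.9
    docker tag gcr.io/google-containers/addon-resizer:1.8.9 \
      gcr.io/google-containers/addon-resizer:functional-514284
    # Copy the image from the host Docker daemon into the cluster runtime.
    minikube -p functional-514284 image load --daemon \
      gcr.io/google-containers/addon-resizer:functional-514284
    # Verify the in-cluster runtime now lists it.
    minikube -p functional-514284 image ls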

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 image save gcr.io/google-containers/addon-resizer:functional-514284 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-514284 image save gcr.io/google-containers/addon-resizer:functional-514284 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.578669661s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.58s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 image rm gcr.io/google-containers/addon-resizer:functional-514284 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-514284 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.691657502s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.13s)
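Together with ImageSaveToFile and ImageRemove above, this test completes a tar-based round trip, useful for moving images without a registry. A minimal sketch (the tarball path is a placeholder):

    # Export from the cluster runtime, remove, then restore from the tarball.
    minikube -p functional-514284 image save \
      gcr.io/google-containers/addon-resizer:functional-514284 /tmp/addon-resizer.tar
    minikube -p functional-514284 image rm \
      gcr.io/google-containers/addon-resizer:functional-514284
    minikube -p functional-514284 image load /tmp/addon-resizer.tar
    minikube -p functional-514284 image ls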

TestFunctional/parallel/MountCmd/specific-port (2.01s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-514284 /tmp/TestFunctionalparallelMountCmdspecific-port2245455608/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-514284 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (253.896797ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-514284 /tmp/TestFunctionalparallelMountCmdspecific-port2245455608/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-514284 /tmp/TestFunctionalparallelMountCmdspecific-port2245455608/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.01s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (5.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-514284
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 image save --daemon gcr.io/google-containers/addon-resizer:functional-514284 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-514284 image save --daemon gcr.io/google-containers/addon-resizer:functional-514284 --alsologtostderr: (5.820757425s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-514284
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (5.86s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.68s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-514284 /tmp/TestFunctionalparallelMountCmdVerifyCleanup673943501/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-514284 /tmp/TestFunctionalparallelMountCmdVerifyCleanup673943501/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-514284 /tmp/TestFunctionalparallelMountCmdVerifyCleanup673943501/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-514284 ssh "findmnt -T" /mount1: exit status 1 (304.912193ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-514284 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-514284 /tmp/TestFunctionalparallelMountCmdVerifyCleanup673943501/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-514284 /tmp/TestFunctionalparallelMountCmdVerifyCleanup673943501/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-514284 /tmp/TestFunctionalparallelMountCmdVerifyCleanup673943501/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.68s)
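VerifyCleanup checks that a single `mount --kill=true` invocation tears down every outstanding mount daemon for a profile, which is what the three "unable to find parent, assuming dead" lines confirm. A minimal sketch (the source path is a placeholder):

    # Start several concurrent 9p mounts for one profile.
    minikube mount -p functional-514284 /tmp/demo-src:/mount1 &
    minikube mount -p functional-514284 /tmp/demo-src:/mount2 &
    minikube mount -p functional-514284 /tmp/demo-src:/mount3 &
    # A single kill flag cleans up all of them.
    minikube mount -p functional-514284 --kill=true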

TestFunctional/parallel/ServiceCmd/DeployApp (14.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-514284 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-514284 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-vdj2c" [2be8b0d2-d40e-4dee-891c-9f14b382c21b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-vdj2c" [2be8b0d2-d40e-4dee-891c-9f14b382c21b] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 14.038631634s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (14.25s)
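The deployment under test is created with plain kubectl against the minikube context; nothing minikube-specific happens until the service lookups further down. A minimal sketch (the wait step is an addition here, standing in for the pod polling the test harness does):

    kubectl --context functional-514284 create deployment hello-node \
      --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-514284 expose deployment hello-node \
      --type=NodePort --port=8080
    # Block until the pod backing the deployment is Ready.
    kubectl --context functional-514284 wait --for=condition=ready pod \
      -l app=hello-node --timeout=120s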

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "280.621413ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "60.642947ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "276.914318ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "57.702503ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)
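The timing gap above is expected: plain `profile list -o json` validates each cluster's live status, while `--light` only reads the on-disk profile config, hence roughly 277ms versus 58ms. A minimal sketch of consuming the output, assuming jq is installed and the valid/invalid JSON shape of recent minikube releases (an assumption here):

    # Full listing (validates clusters) vs. light listing (config only).
    minikube profile list -o json | jq -r '.valid[].Name'
    minikube profile list -o json --light | jq -r '.valid[].Name'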

TestFunctional/parallel/ServiceCmd/List (1.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 service list
functional_test.go:1458: (dbg) Done: out/minikube-linux-amd64 -p functional-514284 service list: (1.245958437s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.25s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 service list -o json
functional_test.go:1488: (dbg) Done: out/minikube-linux-amd64 -p functional-514284 service list -o json: (1.255857071s)
functional_test.go:1493: Took "1.255949484s" to run "out/minikube-linux-amd64 -p functional-514284 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.26s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.155:30493
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

TestFunctional/parallel/ServiceCmd/Format (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

TestFunctional/parallel/ServiceCmd/URL (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-514284 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.155:30493
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)

TestFunctional/delete_addon-resizer_images (0.06s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-514284
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-514284
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-514284
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestIngressAddonLegacy/StartLegacyK8sCluster (111.75s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-823610 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1107 23:17:00.880532   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
E1107 23:18:22.801450   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-823610 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m51.754648253s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (111.75s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.43s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-823610 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-823610 addons enable ingress --alsologtostderr -v=5: (16.434798805s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.43s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.57s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-823610 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.57s)
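Both legacy-ingress activations are plain addon toggles against the v1.18.20 cluster started above. A minimal sketch (the list step is an addition here, just to confirm state):

    minikube -p ingress-addon-legacy-823610 addons enable ingress
    minikube -p ingress-addon-legacy-823610 addons enable ingress-dns
    minikube -p ingress-addon-legacy-823610 addons list | grep ingress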

TestJSONOutput/start/Command (70.07s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-338474 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1107 23:22:04.360940   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-338474 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m10.065481975s)
--- PASS: TestJSONOutput/start/Command (70.07s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.7s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-338474 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-338474 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.11s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-338474 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-338474 --output=json --user=testUser: (7.109856817s)
--- PASS: TestJSONOutput/stop/Command (7.11s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-916672 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-916672 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (77.497736ms)

-- stdout --
	{"specversion":"1.0","id":"a6d87225-1d70-42cb-991f-17a885811758","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-916672] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"882dcafd-52ec-42fd-9348-16eddf161d9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17585"}}
	{"specversion":"1.0","id":"35aa1514-2829-4c3c-8eb5-82dcaffabfab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ec8c8ba9-aa34-48ab-baaa-dc42e335b48a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig"}}
	{"specversion":"1.0","id":"ac32cbcd-1be4-42d6-91cc-6c8a82f71e30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube"}}
	{"specversion":"1.0","id":"fd21c1cc-2ff1-44e4-a226-47d61a82796c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ca75b38e-7eee-48f4-84b8-254a78384283","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ef25a711-3f19-44fe-a13c-e2fd1acf583c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-916672" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-916672
--- PASS: TestErrorJSONOutput (0.22s)
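The structured output above is a stream of CloudEvents-style JSON objects, one per line, so errors can be machine-filtered by event type. A minimal sketch, assuming jq (the unsupported --driver=fail value is deliberate, as in the test, and the profile name is a placeholder):

    minikube start -p json-demo --memory=2200 --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    # Expected output: The driver 'fail' is not supported on linux/amd64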

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (97.04s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-721491 --driver=kvm2  --container-runtime=crio
E1107 23:23:26.282043   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-721491 --driver=kvm2  --container-runtime=crio: (46.482417528s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-723670 --driver=kvm2  --container-runtime=crio
E1107 23:23:53.871493   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
E1107 23:23:53.876771   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
E1107 23:23:53.887063   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
E1107 23:23:53.907351   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
E1107 23:23:53.947647   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
E1107 23:23:54.027972   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
E1107 23:23:54.188382   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
E1107 23:23:54.508966   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
E1107 23:23:55.149491   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
E1107 23:23:56.429955   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
E1107 23:23:58.990947   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
E1107 23:24:04.111576   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
E1107 23:24:14.352759   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
E1107 23:24:34.833375   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-723670 --driver=kvm2  --container-runtime=crio: (47.903327699s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-721491
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-723670
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-723670" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-723670
helpers_test.go:175: Cleaning up "first-721491" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-721491
--- PASS: TestMinikubeProfile (97.04s)
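TestMinikubeProfile drives two clusters side by side and flips the active profile between them. A minimal sketch of the same flow (profile names are placeholders):

    minikube start -p first --driver=kvm2 --container-runtime=crio
    minikube start -p second --driver=kvm2 --container-runtime=crio
    minikube profile first        # make "first" the active profile
    minikube profile list -ojson  # both should report as valid profiles
    minikube delete -p second && minikube delete -p first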

TestMountStart/serial/StartWithMountFirst (29.97s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-445045 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-445045 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.967305816s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.97s)
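Unlike the TestFunctional mounts, TestMountStart wires the 9p share in at VM boot via --mount, and --no-kubernetes keeps the VM light. A minimal sketch (the profile name is a placeholder; per the VerifyMount checks below, the share appears in the guest at /minikube-host):

    minikube start -p mount-demo --memory=2048 --no-kubernetes \
      --mount --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
      --driver=kvm2 --container-runtime=crio
    minikube -p mount-demo ssh -- ls /minikube-host
    minikube -p mount-demo ssh -- mount | grep 9p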

TestMountStart/serial/VerifyMountFirst (0.41s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-445045 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-445045 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)

TestMountStart/serial/StartWithMountSecond (29.52s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-460920 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1107 23:25:15.793927   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
E1107 23:25:38.956522   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
E1107 23:25:42.434113   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-460920 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.522603312s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.52s)

TestMountStart/serial/VerifyMountSecond (0.38s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-460920 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-460920 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (0.88s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-445045 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.88s)

TestMountStart/serial/VerifyMountPostDelete (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-460920 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-460920 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

TestMountStart/serial/Stop (2.09s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-460920
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-460920: (2.092939336s)
--- PASS: TestMountStart/serial/Stop (2.09s)

TestMountStart/serial/RestartStopped (22.26s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-460920
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-460920: (21.256797681s)
--- PASS: TestMountStart/serial/RestartStopped (22.26s)

TestMountStart/serial/VerifyMountPostStop (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-460920 ssh -- ls /minikube-host
E1107 23:26:10.122849   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-460920 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

TestMultiNode/serial/FreshStart2Nodes (112.53s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-553062 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1107 23:26:37.714895   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-553062 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m52.109307337s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (112.53s)
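A two-node cluster comes up from a single start invocation; --nodes controls the initial count and `node add` grows it later, as AddNode does below. A minimal sketch (profile name as in the log):

    minikube start -p multinode-553062 --nodes=2 --memory=2200 \
      --driver=kvm2 --container-runtime=crio
    # Expect one control plane and one worker, both Running.
    minikube -p multinode-553062 status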

TestMultiNode/serial/DeployApp2Nodes (6.37s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553062 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553062 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-553062 -- rollout status deployment/busybox: (4.548157766s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553062 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553062 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553062 -- exec busybox-5bc68d56bd-tvwc7 -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553062 -- exec busybox-5bc68d56bd-z67r2 -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553062 -- exec busybox-5bc68d56bd-tvwc7 -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553062 -- exec busybox-5bc68d56bd-z67r2 -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553062 -- exec busybox-5bc68d56bd-tvwc7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553062 -- exec busybox-5bc68d56bd-z67r2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.37s)
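DeployApp2Nodes lands one busybox replica per node and runs the same DNS probes in each. A minimal sketch of the per-pod loop, assuming the testdata manifest labels its pods app=busybox (an assumption here; the test discovers pod names via jsonpath instead):

    kubectl --context multinode-553062 rollout status deployment/busybox
    for pod in $(kubectl --context multinode-553062 get pods -l app=busybox \
        -o jsonpath='{.items[*].metadata.name}'); do
      # Each pod must resolve the API service through cluster DNS.
      kubectl --context multinode-553062 exec "$pod" -- \
        nslookup kubernetes.default.svc.cluster.local
    done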

TestMultiNode/serial/AddNode (45.76s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-553062 -v 3 --alsologtostderr
E1107 23:28:53.871578   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-553062 -v 3 --alsologtostderr: (45.175901464s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.76s)

TestMultiNode/serial/ProfileList (0.22s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

TestMultiNode/serial/CopyFile (7.58s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 cp testdata/cp-test.txt multinode-553062:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 ssh -n multinode-553062 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 cp multinode-553062:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3094019046/001/cp-test_multinode-553062.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 ssh -n multinode-553062 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 cp multinode-553062:/home/docker/cp-test.txt multinode-553062-m02:/home/docker/cp-test_multinode-553062_multinode-553062-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 ssh -n multinode-553062 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 ssh -n multinode-553062-m02 "sudo cat /home/docker/cp-test_multinode-553062_multinode-553062-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 cp multinode-553062:/home/docker/cp-test.txt multinode-553062-m03:/home/docker/cp-test_multinode-553062_multinode-553062-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 ssh -n multinode-553062 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 ssh -n multinode-553062-m03 "sudo cat /home/docker/cp-test_multinode-553062_multinode-553062-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 cp testdata/cp-test.txt multinode-553062-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 ssh -n multinode-553062-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 cp multinode-553062-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3094019046/001/cp-test_multinode-553062-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 ssh -n multinode-553062-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 cp multinode-553062-m02:/home/docker/cp-test.txt multinode-553062:/home/docker/cp-test_multinode-553062-m02_multinode-553062.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 ssh -n multinode-553062-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 ssh -n multinode-553062 "sudo cat /home/docker/cp-test_multinode-553062-m02_multinode-553062.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 cp multinode-553062-m02:/home/docker/cp-test.txt multinode-553062-m03:/home/docker/cp-test_multinode-553062-m02_multinode-553062-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 ssh -n multinode-553062-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 ssh -n multinode-553062-m03 "sudo cat /home/docker/cp-test_multinode-553062-m02_multinode-553062-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 cp testdata/cp-test.txt multinode-553062-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 ssh -n multinode-553062-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 cp multinode-553062-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3094019046/001/cp-test_multinode-553062-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 ssh -n multinode-553062-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 cp multinode-553062-m03:/home/docker/cp-test.txt multinode-553062:/home/docker/cp-test_multinode-553062-m03_multinode-553062.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 ssh -n multinode-553062-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 ssh -n multinode-553062 "sudo cat /home/docker/cp-test_multinode-553062-m03_multinode-553062.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 cp multinode-553062-m03:/home/docker/cp-test.txt multinode-553062-m02:/home/docker/cp-test_multinode-553062-m03_multinode-553062-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 ssh -n multinode-553062-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 ssh -n multinode-553062-m02 "sudo cat /home/docker/cp-test_multinode-553062-m03_multinode-553062-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.58s)
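CopyFile exercises `minikube cp` in every direction: host to node, node to host, and node to node, reading each copy back over `ssh -n`. A minimal sketch (file names are placeholders):

    # Host -> control-plane node, then verify over SSH.
    minikube -p multinode-553062 cp ./local.txt multinode-553062:/home/docker/local.txt
    minikube -p multinode-553062 ssh -n multinode-553062 "sudo cat /home/docker/local.txt"
    # Node -> node; minikube relays the file through the host.
    minikube -p multinode-553062 cp multinode-553062:/home/docker/local.txt \
      multinode-553062-m02:/home/docker/local.txt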

TestMultiNode/serial/StopNode (2.97s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-553062 node stop m03: (2.090775012s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-553062 status: exit status 7 (437.6446ms)

-- stdout --
	multinode-553062
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-553062-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-553062-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-553062 status --alsologtostderr: exit status 7 (440.118725ms)

-- stdout --
	multinode-553062
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-553062-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-553062-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1107 23:29:09.397687   32656 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:29:09.397910   32656 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:29:09.397918   32656 out.go:309] Setting ErrFile to fd 2...
	I1107 23:29:09.397922   32656 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:29:09.398109   32656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
	I1107 23:29:09.398260   32656 out.go:303] Setting JSON to false
	I1107 23:29:09.398295   32656 mustload.go:65] Loading cluster: multinode-553062
	I1107 23:29:09.398404   32656 notify.go:220] Checking for updates...
	I1107 23:29:09.398715   32656 config.go:182] Loaded profile config "multinode-553062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:29:09.398732   32656 status.go:255] checking status of multinode-553062 ...
	I1107 23:29:09.399228   32656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:29:09.399291   32656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:29:09.415687   32656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33379
	I1107 23:29:09.416074   32656 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:29:09.416638   32656 main.go:141] libmachine: Using API Version  1
	I1107 23:29:09.416659   32656 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:29:09.417092   32656 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:29:09.417312   32656 main.go:141] libmachine: (multinode-553062) Calling .GetState
	I1107 23:29:09.418921   32656 status.go:330] multinode-553062 host status = "Running" (err=<nil>)
	I1107 23:29:09.418938   32656 host.go:66] Checking if "multinode-553062" exists ...
	I1107 23:29:09.419207   32656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:29:09.419245   32656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:29:09.433599   32656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40081
	I1107 23:29:09.433951   32656 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:29:09.434436   32656 main.go:141] libmachine: Using API Version  1
	I1107 23:29:09.434453   32656 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:29:09.434755   32656 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:29:09.434940   32656 main.go:141] libmachine: (multinode-553062) Calling .GetIP
	I1107 23:29:09.437853   32656 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:29:09.438231   32656 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:26:27 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:29:09.438265   32656 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:29:09.438382   32656 host.go:66] Checking if "multinode-553062" exists ...
	I1107 23:29:09.438716   32656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:29:09.438766   32656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:29:09.452342   32656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35955
	I1107 23:29:09.452676   32656 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:29:09.453136   32656 main.go:141] libmachine: Using API Version  1
	I1107 23:29:09.453158   32656 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:29:09.453471   32656 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:29:09.453629   32656 main.go:141] libmachine: (multinode-553062) Calling .DriverName
	I1107 23:29:09.453817   32656 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 23:29:09.453845   32656 main.go:141] libmachine: (multinode-553062) Calling .GetSSHHostname
	I1107 23:29:09.456421   32656 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:29:09.456835   32656 main.go:141] libmachine: (multinode-553062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:51:99", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:26:27 +0000 UTC Type:0 Mac:52:54:00:a6:51:99 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-553062 Clientid:01:52:54:00:a6:51:99}
	I1107 23:29:09.456863   32656 main.go:141] libmachine: (multinode-553062) DBG | domain multinode-553062 has defined IP address 192.168.39.246 and MAC address 52:54:00:a6:51:99 in network mk-multinode-553062
	I1107 23:29:09.456993   32656 main.go:141] libmachine: (multinode-553062) Calling .GetSSHPort
	I1107 23:29:09.457128   32656 main.go:141] libmachine: (multinode-553062) Calling .GetSSHKeyPath
	I1107 23:29:09.457254   32656 main.go:141] libmachine: (multinode-553062) Calling .GetSSHUsername
	I1107 23:29:09.457362   32656 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062/id_rsa Username:docker}
	I1107 23:29:09.552578   32656 ssh_runner.go:195] Run: systemctl --version
	I1107 23:29:09.559107   32656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:29:09.573512   32656 kubeconfig.go:92] found "multinode-553062" server: "https://192.168.39.246:8443"
	I1107 23:29:09.573538   32656 api_server.go:166] Checking apiserver status ...
	I1107 23:29:09.573570   32656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:29:09.585388   32656 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1071/cgroup
	I1107 23:29:09.598849   32656 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/podcf3161d745dce4ca9e35cf659a0b5ec9/crio-2f60917d7ee76f8bc0991de243fd8a9da27aa228911b0518d31060209519367b"
	I1107 23:29:09.598908   32656 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcf3161d745dce4ca9e35cf659a0b5ec9/crio-2f60917d7ee76f8bc0991de243fd8a9da27aa228911b0518d31060209519367b/freezer.state
	I1107 23:29:09.608086   32656 api_server.go:204] freezer state: "THAWED"
	I1107 23:29:09.608108   32656 api_server.go:253] Checking apiserver healthz at https://192.168.39.246:8443/healthz ...
	I1107 23:29:09.612712   32656 api_server.go:279] https://192.168.39.246:8443/healthz returned 200:
	ok
	I1107 23:29:09.612738   32656 status.go:421] multinode-553062 apiserver status = Running (err=<nil>)
	I1107 23:29:09.612750   32656 status.go:257] multinode-553062 status: &{Name:multinode-553062 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1107 23:29:09.612771   32656 status.go:255] checking status of multinode-553062-m02 ...
	I1107 23:29:09.613072   32656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:29:09.613110   32656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:29:09.627146   32656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40345
	I1107 23:29:09.627535   32656 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:29:09.627989   32656 main.go:141] libmachine: Using API Version  1
	I1107 23:29:09.628012   32656 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:29:09.628274   32656 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:29:09.628433   32656 main.go:141] libmachine: (multinode-553062-m02) Calling .GetState
	I1107 23:29:09.629885   32656 status.go:330] multinode-553062-m02 host status = "Running" (err=<nil>)
	I1107 23:29:09.629902   32656 host.go:66] Checking if "multinode-553062-m02" exists ...
	I1107 23:29:09.630229   32656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:29:09.630263   32656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:29:09.644346   32656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45941
	I1107 23:29:09.644722   32656 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:29:09.645175   32656 main.go:141] libmachine: Using API Version  1
	I1107 23:29:09.645196   32656 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:29:09.645482   32656 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:29:09.645661   32656 main.go:141] libmachine: (multinode-553062-m02) Calling .GetIP
	I1107 23:29:09.648157   32656 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:29:09.648528   32656 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-553062-m02 Clientid:01:52:54:00:49:ff:75}
	I1107 23:29:09.648790   32656 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:29:09.648915   32656 host.go:66] Checking if "multinode-553062-m02" exists ...
	I1107 23:29:09.649396   32656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:29:09.649434   32656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:29:09.663889   32656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43129
	I1107 23:29:09.664242   32656 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:29:09.664621   32656 main.go:141] libmachine: Using API Version  1
	I1107 23:29:09.664646   32656 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:29:09.664995   32656 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:29:09.665172   32656 main.go:141] libmachine: (multinode-553062-m02) Calling .DriverName
	I1107 23:29:09.665347   32656 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 23:29:09.665368   32656 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHHostname
	I1107 23:29:09.668069   32656 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:29:09.668411   32656 main.go:141] libmachine: (multinode-553062-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:ff:75", ip: ""} in network mk-multinode-553062: {Iface:virbr1 ExpiryTime:2023-11-08 00:27:34 +0000 UTC Type:0 Mac:52:54:00:49:ff:75 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-553062-m02 Clientid:01:52:54:00:49:ff:75}
	I1107 23:29:09.668435   32656 main.go:141] libmachine: (multinode-553062-m02) DBG | domain multinode-553062-m02 has defined IP address 192.168.39.137 and MAC address 52:54:00:49:ff:75 in network mk-multinode-553062
	I1107 23:29:09.668607   32656 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHPort
	I1107 23:29:09.668767   32656 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHKeyPath
	I1107 23:29:09.668949   32656 main.go:141] libmachine: (multinode-553062-m02) Calling .GetSSHUsername
	I1107 23:29:09.669110   32656 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17585-9647/.minikube/machines/multinode-553062-m02/id_rsa Username:docker}
	I1107 23:29:09.752270   32656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:29:09.765099   32656 status.go:257] multinode-553062-m02 status: &{Name:multinode-553062-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1107 23:29:09.765130   32656 status.go:255] checking status of multinode-553062-m03 ...
	I1107 23:29:09.765431   32656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1107 23:29:09.765480   32656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1107 23:29:09.780225   32656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43661
	I1107 23:29:09.780626   32656 main.go:141] libmachine: () Calling .GetVersion
	I1107 23:29:09.781141   32656 main.go:141] libmachine: Using API Version  1
	I1107 23:29:09.781164   32656 main.go:141] libmachine: () Calling .SetConfigRaw
	I1107 23:29:09.781579   32656 main.go:141] libmachine: () Calling .GetMachineName
	I1107 23:29:09.781742   32656 main.go:141] libmachine: (multinode-553062-m03) Calling .GetState
	I1107 23:29:09.783234   32656 status.go:330] multinode-553062-m03 host status = "Stopped" (err=<nil>)
	I1107 23:29:09.783249   32656 status.go:343] host is not running, skipping remaining checks
	I1107 23:29:09.783256   32656 status.go:257] multinode-553062-m03 status: &{Name:multinode-553062-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.97s)
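The --alsologtostderr trace above shows how `status` decides the apiserver is Running: it pgreps for the kube-apiserver process over SSH, confirms the pod's freezer cgroup is THAWED (i.e. not paused), then GETs /healthz expecting 200. A minimal local sketch of that sequence follows; the cgroup path, TLS handling, and healthz URL are illustrative placeholders, not minikube's actual code, which runs these checks over SSH inside the VM.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

// probeAPIServer retraces the three checks in the trace above:
// 1) pgrep for the kube-apiserver process,
// 2) read the pod's freezer cgroup and require "THAWED" (i.e. not paused),
// 3) GET /healthz and expect HTTP 200.
func probeAPIServer(freezerStatePath, healthzURL string) (string, error) {
	if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err != nil {
		return "Stopped", nil // no apiserver process at all
	}

	state, err := exec.Command("sudo", "cat", freezerStatePath).Output()
	if err != nil {
		return "Error", err
	}
	if strings.TrimSpace(string(state)) != "THAWED" {
		return "Paused", nil // frozen cgroup: cluster is paused, not stopped
	}

	// Sketch only: the real check authenticates with the cluster CA
	// rather than skipping TLS verification.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get(healthzURL)
	if err != nil {
		return "Stopped", err
	}
	defer resp.Body.Close()
	if resp.StatusCode == http.StatusOK {
		return "Running", nil
	}
	return "Error", fmt.Errorf("healthz returned %d", resp.StatusCode)
}

func main() {
	// Placeholder path: the real one embeds the pod UID and container ID,
	// as seen in the trace above.
	status, err := probeAPIServer(
		"/sys/fs/cgroup/freezer/kubepods/burstable/pod<uid>/crio-<id>/freezer.state",
		"https://192.168.39.246:8443/healthz")
	fmt.Println(status, err)
}
```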

TestMultiNode/serial/StartAfterStop (31.75s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 node start m03 --alsologtostderr
E1107 23:29:21.556922   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-553062 node start m03 --alsologtostderr: (31.108224624s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (31.75s)

TestMultiNode/serial/DeleteNode (1.78s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-553062 node delete m03: (1.221978376s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.78s)
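The go-template query above flattens each node to the status of its Ready condition, one per line, so the test only has to confirm every line is "True". A standalone sketch of the same check, assuming kubectl on PATH and a configured context:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// allNodesReady mirrors the go-template check above: print each node's
// Ready condition status on its own line, then require every line to be
// "True".
func allNodesReady() (bool, error) {
	out, err := exec.Command("kubectl", "get", "nodes", "-o",
		`go-template={{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`,
	).Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if strings.TrimSpace(line) != "True" {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ready, err := allNodesReady()
	fmt.Println("all nodes ready:", ready, err)
}
```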

TestMultiNode/serial/RestartMultiNode (445.11s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-553062 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1107 23:43:53.871245   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
E1107 23:45:38.957158   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
E1107 23:45:42.435895   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
E1107 23:48:42.003703   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
E1107 23:48:53.871476   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
E1107 23:50:38.956839   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
E1107 23:50:42.436368   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-553062 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m24.56775659s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553062 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (445.11s)

TestMultiNode/serial/ValidateNameConflict (48.8s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-553062
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-553062-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-553062-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (80.981111ms)
-- stdout --
	* [multinode-553062-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-553062-m02' is duplicated with machine name 'multinode-553062-m02' in profile 'multinode-553062'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-553062-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-553062-m03 --driver=kvm2  --container-runtime=crio: (47.664807319s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-553062
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-553062: exit status 80 (235.889202ms)
-- stdout --
	* Adding node m03 to cluster multinode-553062
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-553062-m03 already exists in multinode-553062-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-553062-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.80s)
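The two failures above come from a single rule: a new profile name may collide neither with an existing profile nor with a machine name inside a multi-node profile. A hedged reconstruction of that check, with illustrative data shapes rather than minikube's real config types:

```go
package main

import "fmt"

// profile is an illustrative stand-in for minikube's profile config; the
// real type lives elsewhere and carries far more fields.
type profile struct {
	Name  string
	Nodes []string // machine names, e.g. "multinode-553062-m02"
}

// checkProfileName enforces the uniqueness rule demonstrated above: a new
// profile may collide neither with an existing profile name nor with a
// machine name inside an existing multi-node profile.
func checkProfileName(name string, existing []profile) error {
	for _, p := range existing {
		if p.Name == name {
			return fmt.Errorf("profile %q already exists", name)
		}
		for _, machine := range p.Nodes {
			if machine == name {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q",
					name, machine, p.Name)
			}
		}
	}
	return nil
}

func main() {
	existing := []profile{{
		Name:  "multinode-553062",
		Nodes: []string{"multinode-553062", "multinode-553062-m02"},
	}}
	fmt.Println(checkProfileName("multinode-553062-m02", existing)) // rejected, as in the log
	fmt.Println(checkProfileName("multinode-553062-m03", existing)) // <nil>: free
}
```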

TestScheduledStopUnix (120.5s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-153425 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-153425 --memory=2048 --driver=kvm2  --container-runtime=crio: (48.753200973s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-153425 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-153425 -n scheduled-stop-153425
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-153425 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-153425 --cancel-scheduled
E1107 23:56:56.919189   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-153425 -n scheduled-stop-153425
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-153425
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-153425 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-153425
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-153425: exit status 7 (72.728394ms)
-- stdout --
	scheduled-stop-153425
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-153425 -n scheduled-stop-153425
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-153425 -n scheduled-stop-153425: exit status 7 (72.63347ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-153425" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-153425
--- PASS: TestScheduledStopUnix (120.50s)
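The sequence above exercises three behaviours of scheduled stops: --schedule arms a pending stop, a second --schedule replaces it, and --cancel-scheduled disarms it. A minimal sketch of those semantics, using a time.Timer in place of the daemonized child process minikube actually spawns:

```go
package main

import (
	"fmt"
	"time"
)

// stopScheduler models the behaviour the test drives: each `minikube stop
// --schedule <d>` arms (or re-arms) a pending stop, and --cancel-scheduled
// disarms it.
type stopScheduler struct{ timer *time.Timer }

func (s *stopScheduler) schedule(d time.Duration, stop func()) {
	s.cancel() // a new --schedule replaces any pending stop
	s.timer = time.AfterFunc(d, stop)
}

func (s *stopScheduler) cancel() { // --cancel-scheduled
	if s.timer != nil {
		s.timer.Stop()
		s.timer = nil
	}
}

func main() {
	var s stopScheduler
	stop := func() { fmt.Println("stopping cluster") }
	s.schedule(50*time.Millisecond, stop) // like --schedule 5m
	s.schedule(20*time.Millisecond, stop) // like --schedule 15s: re-arms
	s.cancel()
	time.Sleep(40 * time.Millisecond)
	fmt.Println("no stop fired after cancel")
}
```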

TestKubernetesUpgrade (197.1s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-161055 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-161055 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m51.534026534s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-161055
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-161055: (2.112034173s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-161055 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-161055 status --format={{.Host}}: exit status 7 (90.405608ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-161055 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-161055 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (49.809276343s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-161055 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-161055 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-161055 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (125.13059ms)
-- stdout --
	* [kubernetes-upgrade-161055] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-161055
	    minikube start -p kubernetes-upgrade-161055 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1610552 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.3, by running:
	    
	    minikube start -p kubernetes-upgrade-161055 --kubernetes-version=v1.28.3
	    
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-161055 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-161055 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (32.262954219s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-161055" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-161055
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-161055: (1.104941604s)
--- PASS: TestKubernetesUpgrade (197.10s)
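The exit status 106 above is a deliberate guard: moving an existing cluster to an older Kubernetes version is refused, while the same or a newer version restarts cleanly. A hedged sketch of such a guard, using golang.org/x/mod/semver for comparison (minikube's real implementation may differ):

```go
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkVersionChange models the guard behind exit status 106: a requested
// version older than the cluster's current version is refused outright,
// while the same or a newer version is allowed to proceed.
func checkVersionChange(current, requested string) error {
	if semver.Compare(requested, current) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s",
			current, requested)
	}
	return nil
}

func main() {
	fmt.Println(checkVersionChange("v1.28.3", "v1.16.0")) // refused, as above
	fmt.Println(checkVersionChange("v1.28.3", "v1.28.3")) // <nil>: restart at same version
}
```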

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-798084 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-798084 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (90.058302ms)
-- stdout --
	* [NoKubernetes-798084] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
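The 0.09s duration shows this is pure flag validation: --kubernetes-version is rejected alongside --no-kubernetes before any VM work starts. A minimal reproduction of that usage check; the surrounding program is illustrative, only the flag names and exit status 14 come from the log:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start the VM without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	// The combination is contradictory, so it is rejected up front.
	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // the MK_USAGE exit status seen above
	}
	fmt.Println("flags ok")
}
```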

TestPause/serial/Start (105.22s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-036330 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-036330 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m45.220787634s)
--- PASS: TestPause/serial/Start (105.22s)

TestNoKubernetes/serial/StartWithK8s (110.44s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-798084 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-798084 --driver=kvm2  --container-runtime=crio: (1m50.134046994s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-798084 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (110.44s)

TestNoKubernetes/serial/StartWithStopK8s (7.15s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-798084 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-798084 --no-kubernetes --driver=kvm2  --container-runtime=crio: (5.814092258s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-798084 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-798084 status -o json: exit status 2 (280.333906ms)
-- stdout --
	{"Name":"NoKubernetes-798084","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-798084
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-798084: (1.050959178s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.15s)

TestNoKubernetes/serial/Start (29.07s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-798084 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-798084 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.073867988s)
--- PASS: TestNoKubernetes/serial/Start (29.07s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-798084 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-798084 "sudo systemctl is-active --quiet service kubelet": exit status 1 (268.527053ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
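The verification above relies only on the exit status of systemctl: `is-active --quiet` prints nothing and exits 0 when the unit is active, non-zero (status 3 here) otherwise. A sketch of the same check run locally instead of over SSH:

```go
package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive runs the same command as the test (locally rather than over
// SSH). With --quiet, systemctl prints nothing; only the exit status
// matters: 0 means active, non-zero (status 3 above) means inactive.
func kubeletActive() bool {
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	return err == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive())
}
```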

TestNoKubernetes/serial/ProfileList (1.25s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.25s)

TestNoKubernetes/serial/Stop (1.36s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-798084
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-798084: (1.36441428s)
--- PASS: TestNoKubernetes/serial/Stop (1.36s)

TestNoKubernetes/serial/StartNoArgs (45.93s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-798084 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-798084 --driver=kvm2  --container-runtime=crio: (45.931529237s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (45.93s)

TestNetworkPlugins/group/false (3.32s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-010870 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-010870 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (112.838361ms)
-- stdout --
	* [false-010870] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1108 00:00:44.109055   43581 out.go:296] Setting OutFile to fd 1 ...
	I1108 00:00:44.109168   43581 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:00:44.109179   43581 out.go:309] Setting ErrFile to fd 2...
	I1108 00:00:44.109186   43581 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 00:00:44.109375   43581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9647/.minikube/bin
	I1108 00:00:44.109963   43581 out.go:303] Setting JSON to false
	I1108 00:00:44.110876   43581 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6193,"bootTime":1699395451,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 00:00:44.110936   43581 start.go:138] virtualization: kvm guest
	I1108 00:00:44.113277   43581 out.go:177] * [false-010870] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1108 00:00:44.114813   43581 out.go:177]   - MINIKUBE_LOCATION=17585
	I1108 00:00:44.114856   43581 notify.go:220] Checking for updates...
	I1108 00:00:44.117310   43581 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 00:00:44.118587   43581 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9647/kubeconfig
	I1108 00:00:44.119950   43581 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9647/.minikube
	I1108 00:00:44.121298   43581 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 00:00:44.122596   43581 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 00:00:44.124231   43581 config.go:182] Loaded profile config "NoKubernetes-798084": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1108 00:00:44.124363   43581 config.go:182] Loaded profile config "force-systemd-env-420594": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 00:00:44.124461   43581 config.go:182] Loaded profile config "running-upgrade-802871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1108 00:00:44.124546   43581 driver.go:378] Setting default libvirt URI to qemu:///system
	I1108 00:00:44.160052   43581 out.go:177] * Using the kvm2 driver based on user configuration
	I1108 00:00:44.161392   43581 start.go:298] selected driver: kvm2
	I1108 00:00:44.161405   43581 start.go:902] validating driver "kvm2" against <nil>
	I1108 00:00:44.161415   43581 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 00:00:44.163308   43581 out.go:177] 
	W1108 00:00:44.164575   43581 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1108 00:00:44.165865   43581 out.go:177] 
** /stderr **
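The MK_USAGE failure above is another early validation: CNI cannot be disabled when the container runtime is crio, because crio provides no built-in pod network. A hedged reconstruction of that check (not minikube's actual code):

```go
package main

import (
	"fmt"
	"os"
)

// validateCNI reconstructs the usage check: pod networking must come from a
// CNI plugin when the runtime is crio (or containerd), so --cni=false is
// rejected before any cluster is created.
func validateCNI(runtime, cni string) error {
	if cni == "false" && (runtime == "crio" || runtime == "containerd") {
		return fmt.Errorf("the %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	if err := validateCNI("crio", "false"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14) // matches the exit status 14 in the log
	}
}
```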
net_test.go:88: 
----------------------- debugLogs start: false-010870 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-010870

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-010870

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-010870

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-010870

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-010870

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-010870

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-010870

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-010870

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-010870

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-010870

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-010870

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-010870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-010870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-010870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-010870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-010870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-010870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-010870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-010870" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-010870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-010870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-010870" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-010870

>>> host: docker daemon status:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

>>> host: docker daemon config:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

>>> host: /etc/docker/daemon.json:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

>>> host: docker system info:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

>>> host: cri-docker daemon status:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

>>> host: cri-docker daemon config:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

>>> host: cri-dockerd version:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

>>> host: containerd daemon status:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

>>> host: containerd daemon config:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

>>> host: /etc/containerd/config.toml:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

>>> host: containerd config dump:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

>>> host: crio daemon status:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

>>> host: crio daemon config:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

>>> host: /etc/crio:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

>>> host: crio config:
* Profile "false-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010870"

----------------------- debugLogs end: false-010870 [took: 3.052760085s] --------------------------------
helpers_test.go:175: Cleaning up "false-010870" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-010870
--- PASS: TestNetworkPlugins/group/false (3.32s)
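Every probe in the debug dump above reports the same condition: no false-010870 profile exists on the host, so each inspection prints the same hint instead of real daemon state. A minimal sketch of checking for that state by hand, assuming the out/minikube-linux-amd64 binary under test:

    $ out/minikube-linux-amd64 profile list              # false-010870 should be absent
    $ out/minikube-linux-amd64 start -p false-010870     # would recreate it, per the hint above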

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-798084 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-798084 "sudo systemctl is-active --quiet service kubelet": exit status 1 (217.368772ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)
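The pass condition here is an expected failure: systemctl is-active exits non-zero when the unit is inactive (status 3 in the stderr above), and that status survives the minikube ssh round trip, proving kubelet is not running. The same check by hand:

    $ out/minikube-linux-amd64 ssh -p NoKubernetes-798084 \
        "sudo systemctl is-active --quiet service kubelet"
    $ echo $?    # non-zero here is success for this test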

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (172.99s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-590541 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-590541 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (2m52.9854147s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (172.99s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.91s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.91s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (120.57s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-320390 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-320390 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (2m0.573177366s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (120.57s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (12.54s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-590541 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8fcfe5d9-c24e-4e75-94f0-c14040dbbff3] Pending
helpers_test.go:344: "busybox" [8fcfe5d9-c24e-4e75-94f0-c14040dbbff3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8fcfe5d9-c24e-4e75-94f0-c14040dbbff3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 12.026788306s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-590541 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (12.54s)
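DeployApp is a plain create-then-wait flow: apply the busybox manifest, wait for the pod carrying the integration-test=busybox label to turn Ready, then exec into it. A rough equivalent using kubectl wait in place of the harness's own polling loop:

    $ kubectl --context old-k8s-version-590541 create -f testdata/busybox.yaml
    $ kubectl --context old-k8s-version-590541 wait pod -l integration-test=busybox \
        --for=condition=Ready --timeout=8m
    $ kubectl --context old-k8s-version-590541 exec busybox -- /bin/sh -c "ulimit -n"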

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.97s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-590541 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1108 00:05:22.004404   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-590541 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.97s)
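This step exercises minikube's addon image and registry overrides on a live cluster: metrics-server is enabled with its image rewritten to registry.k8s.io/echoserver:1.4 from the deliberately unreachable fake.domain registry, then the Deployment is inspected. By hand, with the final grep being an illustrative addition rather than part of the harness:

    $ out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-590541 \
        --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
        --registries=MetricsServer=fake.domain
    $ kubectl --context old-k8s-version-590541 describe deploy/metrics-server \
        -n kube-system | grep fake.domain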

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (61.27s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-253253 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
E1108 00:05:38.956385   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
E1108 00:05:42.434029   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-253253 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (1m1.273956226s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (61.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.5s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-320390 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e8b1b9af-ef0e-4bf8-8f4e-bcdc804dfd7c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e8b1b9af-ef0e-4bf8-8f4e-bcdc804dfd7c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.031731926s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-320390 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.50s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.44s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-253253 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e61d4d4d-56d1-4cc2-ba20-16be8f2b57f5] Pending
helpers_test.go:344: "busybox" [e61d4d4d-56d1-4cc2-ba20-16be8f2b57f5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e61d4d4d-56d1-4cc2-ba20-16be8f2b57f5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.031534785s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-253253 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.44s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-320390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-320390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.098747365s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-320390 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.18s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-253253 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-253253 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.097304161s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-253253 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.39s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-688874
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.39s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (99.49s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-039263 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-039263 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (1m39.485334089s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (99.49s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (791.21s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-590541 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-590541 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (13m10.918616297s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-590541 -n old-k8s-version-590541
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (791.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (572.34s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-320390 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-320390 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (9m32.027241493s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-320390 -n no-preload-320390
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (572.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.41s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-039263 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [86fb6a2f-8f76-472e-99e3-5adfdc82cc4d] Pending
helpers_test.go:344: "busybox" [86fb6a2f-8f76-472e-99e3-5adfdc82cc4d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [86fb6a2f-8f76-472e-99e3-5adfdc82cc4d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.027088838s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-039263 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (564.64s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-253253 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-253253 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (9m24.338011454s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-253253 -n embed-certs-253253
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (564.64s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-039263 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-039263 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (424.95s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-039263 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
E1108 00:13:36.919955   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
E1108 00:13:53.872128   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
E1108 00:15:38.957129   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
E1108 00:15:42.434029   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-039263 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (7m4.654587252s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-039263 -n default-k8s-diff-port-039263
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (424.95s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (62.55s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-409933 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-409933 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (1m2.550560275s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (62.55s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.58s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-409933 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-409933 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.576969123s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.58s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.46s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-409933 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-409933 --alsologtostderr -v=3: (10.459292345s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.46s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-409933 -n newest-cni-409933
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-409933 -n newest-cni-409933: exit status 7 (85.684622ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-409933 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)
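Note the "exit status 7 (may be ok)" handling: minikube status signals a stopped host through its exit code as well as its output, and enabling an addon only edits the profile's configuration, so it works against a stopped cluster. A sketch of tolerating that expected non-zero status in a script:

    $ out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-409933 || true
    Stopped
    $ out/minikube-linux-amd64 addons enable dashboard -p newest-cni-409933 \
        --images=MetricsScraper=registry.k8s.io/echoserver:1.4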

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (53.4s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-409933 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3
E1108 00:33:53.871322   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/ingress-addon-legacy-823610/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-409933 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.3: (53.054906097s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-409933 -n newest-cni-409933
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (53.40s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-409933 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-409933 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-409933 -n newest-cni-409933
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-409933 -n newest-cni-409933: exit status 2 (277.075191ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-409933 -n newest-cni-409933
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-409933 -n newest-cni-409933: exit status 2 (287.099919ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-409933 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-409933 -n newest-cni-409933
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-409933 -n newest-cni-409933
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.00s)
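The Pause sequence deliberately treats exit status 2 as the expected answer while paused: {{.APIServer}} reports Paused and {{.Kubelet}} reports Stopped, each with a non-zero exit, and only after unpause are the same status calls expected to exit 0 again. Condensed into a sketch (the final Running line is the expected post-unpause output, not captured in the log above):

    $ out/minikube-linux-amd64 pause -p newest-cni-409933
    $ out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-409933 || true
    Paused
    $ out/minikube-linux-amd64 unpause -p newest-cni-409933
    $ out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-409933
    Running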

                                                
                                    
TestNetworkPlugins/group/auto/Start (69.82s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-010870 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-010870 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m9.818187576s)
--- PASS: TestNetworkPlugins/group/auto/Start (69.82s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (102.13s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-010870 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E1108 00:35:09.565904   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/client.crt: no such file or directory
E1108 00:35:09.571343   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/client.crt: no such file or directory
E1108 00:35:09.581622   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/client.crt: no such file or directory
E1108 00:35:09.601904   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/client.crt: no such file or directory
E1108 00:35:09.642230   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/client.crt: no such file or directory
E1108 00:35:09.723052   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/client.crt: no such file or directory
E1108 00:35:09.884232   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/client.crt: no such file or directory
E1108 00:35:10.204671   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/client.crt: no such file or directory
E1108 00:35:10.845288   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/client.crt: no such file or directory
E1108 00:35:12.126243   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/client.crt: no such file or directory
E1108 00:35:14.687337   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/client.crt: no such file or directory
E1108 00:35:19.807556   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/client.crt: no such file or directory
E1108 00:35:30.047940   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-010870 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m42.13484976s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (102.13s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (103.72s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-010870 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-010870 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m43.719025956s)
--- PASS: TestNetworkPlugins/group/calico/Start (103.72s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-010870 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.4s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-010870 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-pt9xw" [6ca2cc4c-a2eb-4b12-98fa-3be1f417ec47] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1108 00:35:42.433772   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-pt9xw" [6ca2cc4c-a2eb-4b12-98fa-3be1f417ec47] Running
E1108 00:35:50.528900   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.016363394s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.40s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-010870 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-010870 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-010870 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
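DNS, Localhost and HairPin are three small probes executed inside the netcat deployment: resolve the in-cluster kubernetes.default name, connect to the pod's own port over localhost, and connect back to it through its service name, which is the hairpin path. Spelled out against the auto-010870 context:

    $ kubectl --context auto-010870 exec deployment/netcat -- nslookup kubernetes.default
    $ kubectl --context auto-010870 exec deployment/netcat -- \
        /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    $ kubectl --context auto-010870 exec deployment/netcat -- \
        /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"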

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (91.85s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-010870 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-010870 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m31.852256261s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (91.85s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-7tljf" [287e6152-8959-4f41-910a-cecb8d15d9fa] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.024537697s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)
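ControllerPod only waits for the CNI's own pod to become Ready in kube-system, selected by the app=kindnet label. An equivalent one-liner, using kubectl wait rather than the harness's polling:

    $ kubectl --context kindnet-010870 -n kube-system wait pod -l app=kindnet \
        --for=condition=Ready --timeout=10m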

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-010870 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.44s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-010870 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-n2fx5" [e85ee12a-b7a5-4522-870b-4ef87a79b833] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-n2fx5" [e85ee12a-b7a5-4522-870b-4ef87a79b833] Running
E1108 00:36:29.143241   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/client.crt: no such file or directory
E1108 00:36:29.148495   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/client.crt: no such file or directory
E1108 00:36:29.158760   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/client.crt: no such file or directory
E1108 00:36:29.179062   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/client.crt: no such file or directory
E1108 00:36:29.219329   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/client.crt: no such file or directory
E1108 00:36:29.299643   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/client.crt: no such file or directory
E1108 00:36:29.460055   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/client.crt: no such file or directory
E1108 00:36:29.780718   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/client.crt: no such file or directory
E1108 00:36:30.420995   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/client.crt: no such file or directory
E1108 00:36:31.489381   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/client.crt: no such file or directory
E1108 00:36:31.701752   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.022345589s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.44s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-010870 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-010870 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-010870 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (101.77s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-010870 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-010870 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m41.769099876s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (101.77s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-039263 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)
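VerifyKubernetesImages asks CRI-O for every image on the node and flags anything outside the stock minikube set; kindnetd and the busybox test image appear here because earlier steps pulled them. The underlying listing, with the jq projection added for readability (jq on the host is an assumption, not part of the harness):

    $ out/minikube-linux-amd64 ssh -p default-k8s-diff-port-039263 \
        "sudo crictl images -o json" | jq -r '.images[].repoTags[]'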

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.08s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-039263 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-039263 -n default-k8s-diff-port-039263
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-039263 -n default-k8s-diff-port-039263: exit status 2 (293.023546ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-039263 -n default-k8s-diff-port-039263
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-039263 -n default-k8s-diff-port-039263: exit status 2 (264.30746ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-039263 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-039263 -n default-k8s-diff-port-039263
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-039263 -n default-k8s-diff-port-039263
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.08s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (99.62s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-010870 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-010870 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m39.619653137s)
--- PASS: TestNetworkPlugins/group/flannel/Start (99.62s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.04s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-pxznz" [038f274e-51bf-4dc7-b7f2-20de267f3b4e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.040113345s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-010870 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.49s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-010870 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9b87s" [f04d4051-0465-496a-8808-3ba4f5846c8a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9b87s" [f04d4051-0465-496a-8808-3ba4f5846c8a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.012955895s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.49s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-010870 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-010870 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-010870 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-010870 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.4s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-010870 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nq275" [64fd8cd6-5459-4909-8a28-5cf466d638fd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-nq275" [64fd8cd6-5459-4909-8a28-5cf466d638fd] Running
E1108 00:37:51.064917   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/client.crt: no such file or directory
E1108 00:37:53.411463   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/old-k8s-version-590541/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.022460439s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.40s)

TestNetworkPlugins/group/custom-flannel/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-010870 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.30s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-010870 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.23s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-010870 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.27s)

TestNetworkPlugins/group/bridge/Start (63.93s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-010870 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-010870 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m3.930762233s)
--- PASS: TestNetworkPlugins/group/bridge/Start (63.93s)
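
Each network-plugin group provisions its own profile with the CNI under test; here `--cni=bridge` with the kvm2 driver, crio, 3 GiB of memory, and a 15-minute readiness wait, which completed in about a minute. A sketch of driving the same start command from Go (assuming a `minikube` binary on PATH rather than the tree-local out/minikube-linux-amd64):

    package main

    import (
    	"os"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("minikube", "start", "-p", "bridge-010870",
    		"--memory=3072", "--alsologtostderr", "--wait=true",
    		"--wait-timeout=15m", "--cni=bridge", "--driver=kvm2",
    		"--container-runtime=crio")
    	// Stream output so the long (~1 minute here) provisioning is visible live.
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	if err := cmd.Run(); err != nil {
    		panic(err)
    	}
    }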

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-010870 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.88s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-010870 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8j57n" [f2c620ff-7b51-4d35-b4a6-80937d761b9f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1108 00:38:42.006017   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/addons-245409/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-8j57n" [f2c620ff-7b51-4d35-b4a6-80937d761b9f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.012369847s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.88s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-010870 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-010870 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-010870 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-tcl9d" [cb49ba7c-cf33-469e-a70f-2977ae5fd0bb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.025975974s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-010870 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

TestNetworkPlugins/group/flannel/NetCatPod (13.42s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-010870 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-pnqs7" [dbebef7e-5ead-4aa4-964b-1f18d02fef28] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-pnqs7" [dbebef7e-5ead-4aa4-964b-1f18d02fef28] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.019331385s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.42s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-010870 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

TestNetworkPlugins/group/bridge/NetCatPod (10.32s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-010870 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wqgkb" [8686cad4-310a-4a7b-8082-23eefea284b5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-wqgkb" [8686cad4-310a-4a7b-8082-23eefea284b5] Running
E1108 00:39:12.985186   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/no-preload-320390/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.016388673s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.32s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-010870 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-010870 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-010870 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/bridge/DNS (33.35s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-010870 exec deployment/netcat -- nslookup kubernetes.default
E1108 00:39:17.188312   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/client.crt: no such file or directory
E1108 00:39:17.193559   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/client.crt: no such file or directory
E1108 00:39:17.203850   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/client.crt: no such file or directory
E1108 00:39:17.224104   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/client.crt: no such file or directory
E1108 00:39:17.264387   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/client.crt: no such file or directory
E1108 00:39:17.344742   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/client.crt: no such file or directory
E1108 00:39:17.505190   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/client.crt: no such file or directory
E1108 00:39:17.825771   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/client.crt: no such file or directory
E1108 00:39:18.466668   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/client.crt: no such file or directory
E1108 00:39:19.747276   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/client.crt: no such file or directory
E1108 00:39:22.307966   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/client.crt: no such file or directory
E1108 00:39:27.429098   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/client.crt: no such file or directory
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-010870 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.177804512s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-010870 exec deployment/netcat -- nslookup kubernetes.default
E1108 00:39:37.669305   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/default-k8s-diff-port-039263/client.crt: no such file or directory
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-010870 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.165213677s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-010870 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (33.35s)
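
The 33s wall time records a flaky-but-recovered probe: two nslookup attempts timed out after ~15s each ("connection timed out; no servers could be reached") before the third resolved, and the harness retries the probe rather than failing on the first error. A sketch of such a retry loop (fixed pause between attempts; the harness's actual backoff policy may differ):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	ctx := "bridge-010870" // context name from the log above
    	for attempt := 1; ; attempt++ {
    		out, err := exec.Command("kubectl", "--context", ctx, "exec",
    			"deployment/netcat", "--",
    			"nslookup", "kubernetes.default").CombinedOutput()
    		if err == nil {
    			fmt.Printf("resolved on attempt %d\n%s", attempt, out)
    			return
    		}
    		if attempt == 5 {
    			fmt.Printf("giving up: %v\n%s", err, out)
    			return
    		}
    		time.Sleep(5 * time.Second)
    	}
    }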

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-010870 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-010870 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

Test skip (36/294)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.3/cached-images 0
13 TestDownloadOnly/v1.28.3/binaries 0
14 TestDownloadOnly/v1.28.3/kubectl 0
18 TestDownloadOnlyKic 0
32 TestAddons/parallel/Olm 0
44 TestDockerFlags 0
47 TestDockerEnvContainerd 0
49 TestHyperKitDriverInstallOrUpdate 0
50 TestHyperkitDriverSkipUpgrade 0
101 TestFunctional/parallel/DockerEnv 0
102 TestFunctional/parallel/PodmanEnv 0
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
137 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
141 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
150 TestGvisorAddon 0
151 TestImageBuild 0
184 TestKicCustomNetwork 0
185 TestKicExistingNetwork 0
186 TestKicCustomSubnet 0
187 TestKicStaticIP 0
218 TestChangeNoneUser 0
221 TestScheduledStopWindows 0
223 TestSkaffold 0
225 TestInsufficientStorage 0
229 TestMissingContainerUpgrade 0
237 TestStartStop/group/disable-driver-mounts 0.15
251 TestNetworkPlugins/group/kubenet 3.16
259 TestNetworkPlugins/group/cilium 7.39

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

TestDownloadOnly/v1.28.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

TestDownloadOnly/v1.28.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.3/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.3/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-560216" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-560216
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

TestNetworkPlugins/group/kubenet (3.16s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
E1108 00:00:42.433986   16848 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9647/.minikube/profiles/functional-514284/client.crt: no such file or directory
panic.go:523: 
----------------------- debugLogs start: kubenet-010870 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-010870

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-010870

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-010870

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-010870

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-010870

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-010870

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-010870

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-010870

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-010870

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-010870

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> host: /etc/hosts:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> host: /etc/resolv.conf:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-010870

>>> host: crictl pods:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> host: crictl containers:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> k8s: describe netcat deployment:
error: context "kubenet-010870" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-010870" does not exist

>>> k8s: netcat logs:
error: context "kubenet-010870" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-010870" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-010870" does not exist

>>> k8s: coredns logs:
error: context "kubenet-010870" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-010870" does not exist

>>> k8s: api server logs:
error: context "kubenet-010870" does not exist

>>> host: /etc/cni:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> host: ip a s:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> host: ip r s:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> host: iptables-save:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> host: iptables table nat:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-010870" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-010870" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-010870" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> host: kubelet daemon config:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> k8s: kubelet logs:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-010870

>>> host: docker daemon status:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> host: docker daemon config:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> host: docker system info:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> host: cri-docker daemon status:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> host: cri-docker daemon config:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> host: cri-dockerd version:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> host: containerd daemon status:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> host: containerd daemon config:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> host: containerd config dump:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> host: crio daemon status:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> host: crio daemon config:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> host: /etc/crio:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

>>> host: crio config:
* Profile "kubenet-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010870"

----------------------- debugLogs end: kubenet-010870 [took: 3.013488233s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-010870" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-010870
--- SKIP: TestNetworkPlugins/group/kubenet (3.16s)

TestNetworkPlugins/group/cilium (7.39s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-010870 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-010870

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-010870

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-010870

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-010870

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-010870

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-010870

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-010870

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-010870

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-010870

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-010870

>>> host: /etc/nsswitch.conf:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-010870

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-010870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-010870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-010870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-010870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-010870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-010870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-010870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-010870" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-010870

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-010870

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-010870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-010870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-010870

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-010870

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-010870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-010870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-010870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-010870" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-010870" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

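The empty kubeconfig dumped above (clusters: null, contexts: null) accounts for both kubectl error variants in this log: once the profile is gone, no cilium-010870 entry exists for client-go's clientcmd to resolve. A minimal sketch of that lookup using client-go directly (not minikube's own code; the context name is taken from the log):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load and merge kubeconfig files the way kubectl does
	// (KUBECONFIG first, then ~/.kube/config).
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := rules.Load()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	name := "cilium-010870" // context of the deleted profile
	if _, ok := cfg.Contexts[name]; !ok {
		// This is the condition behind the "context was not found
		// for specified context" errors seen throughout this dump.
		fmt.Printf("context %q not found in kubeconfig\n", name)
		os.Exit(1)
	}
	fmt.Printf("context %q exists\n", name)
}
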
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-010870

>>> host: docker daemon status:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

>>> host: docker daemon config:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

>>> host: docker system info:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

>>> host: cri-docker daemon status:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

>>> host: cri-docker daemon config:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

>>> host: cri-dockerd version:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

>>> host: containerd daemon status:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

>>> host: containerd daemon config:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

>>> host: containerd config dump:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

>>> host: crio daemon status:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

>>> host: crio daemon config:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

>>> host: /etc/crio:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

>>> host: crio config:
* Profile "cilium-010870" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010870"

----------------------- debugLogs end: cilium-010870 [took: 6.894912155s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-010870" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-010870
--- SKIP: TestNetworkPlugins/group/cilium (7.39s)
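Both network-plugin entries above end in SKIP rather than FAIL because the test bails out explicitly; in Go's testing package, t.Skip records the reason and halts the test immediately. A minimal sketch of such a guard (function and package names are illustrative, not the actual net_test.go:102 source; only the message is from the log), which also explains why every debugLogs probe above finds no cilium-010870 profile:

package example

import "testing"

// Illustrative stand-in for the guard at net_test.go:102.
func TestNetworkPluginsCilium(t *testing.T) {
	// t.Skip marks the test as skipped and stops its execution,
	// so no cilium-010870 cluster is ever started.
	t.Skip("Skipping the test as it's interfering with other tests and is outdated")
}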